Feb 13 19:04:33.900322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:04:33.900343 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:04:33.900353 kernel: KASLR enabled
Feb 13 19:04:33.900359 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:04:33.900364 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 19:04:33.900370 kernel: random: crng init done
Feb 13 19:04:33.900377 kernel: secureboot: Secure boot disabled
Feb 13 19:04:33.900382 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:04:33.900388 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:04:33.900396 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:04:33.900402 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900407 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900413 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900419 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900426 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900434 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900440 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900447 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900453 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:04:33.900459 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:04:33.900471 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:04:33.900477 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:04:33.900483 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 19:04:33.900489 kernel: Zone ranges:
Feb 13 19:04:33.900495 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:04:33.900503 kernel: DMA32 empty
Feb 13 19:04:33.900508 kernel: Normal empty
Feb 13 19:04:33.900515 kernel: Movable zone start for each node
Feb 13 19:04:33.900521 kernel: Early memory node ranges
Feb 13 19:04:33.900527 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:04:33.900533 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:04:33.900539 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:04:33.900546 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:04:33.900552 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:04:33.900558 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:04:33.900564 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:04:33.900570 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:04:33.900578 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:04:33.900584 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:04:33.900590 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:04:33.900599 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:04:33.900606 kernel: psci: Trusted OS migration not required
Feb 13 19:04:33.900612 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:04:33.900620 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:04:33.900627 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:04:33.900633 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:04:33.900640 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:04:33.900647 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:04:33.900653 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:04:33.900660 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:04:33.900666 kernel: CPU features: detected: Spectre-v4
Feb 13 19:04:33.900679 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:04:33.900687 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:04:33.900695 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:04:33.900702 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:04:33.900709 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:04:33.900715 kernel: alternatives: applying boot alternatives
Feb 13 19:04:33.900723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:04:33.900730 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:04:33.900737 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:04:33.900743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:04:33.900750 kernel: Fallback order for Node 0: 0
Feb 13 19:04:33.900771 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:04:33.900783 kernel: Policy zone: DMA
Feb 13 19:04:33.900811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:04:33.900818 kernel: software IO TLB: area num 4.
Feb 13 19:04:33.900825 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:04:33.900832 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Feb 13 19:04:33.900839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:04:33.900845 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:04:33.900852 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:04:33.900859 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:04:33.900866 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:04:33.900872 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:04:33.900879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:04:33.900885 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:04:33.900893 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:04:33.900900 kernel: GICv3: 256 SPIs implemented
Feb 13 19:04:33.900906 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:04:33.900913 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:04:33.900919 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:04:33.900926 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:04:33.900932 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:04:33.900939 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:04:33.900946 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:04:33.900952 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:04:33.900959 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:04:33.900967 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:04:33.900973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:04:33.900980 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:04:33.900987 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:04:33.900993 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:04:33.901000 kernel: arm-pv: using stolen time PV
Feb 13 19:04:33.901007 kernel: Console: colour dummy device 80x25
Feb 13 19:04:33.901013 kernel: ACPI: Core revision 20230628
Feb 13 19:04:33.901021 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:04:33.901027 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:04:33.901036 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:04:33.901042 kernel: landlock: Up and running.
Feb 13 19:04:33.901049 kernel: SELinux: Initializing.
Feb 13 19:04:33.901056 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:04:33.901063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:04:33.901069 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:04:33.901141 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:04:33.901151 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:04:33.901158 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:04:33.901168 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:04:33.901174 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:04:33.901181 kernel: Remapping and enabling EFI services.
Feb 13 19:04:33.901188 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:04:33.901195 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:04:33.901202 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:04:33.901209 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:04:33.901215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:04:33.901222 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:04:33.901229 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:04:33.901237 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:04:33.901244 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:04:33.901256 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:04:33.901265 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:04:33.901272 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:04:33.901279 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:04:33.901286 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:04:33.901294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:04:33.901301 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:04:33.901309 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:04:33.901316 kernel: SMP: Total of 4 processors activated.
Feb 13 19:04:33.901323 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:04:33.901330 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:04:33.901337 kernel: CPU features: detected: Common not Private translations
Feb 13 19:04:33.901344 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:04:33.901352 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:04:33.901359 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:04:33.901368 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:04:33.901375 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:04:33.901382 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:04:33.901389 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:04:33.901396 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:04:33.901404 kernel: alternatives: applying system-wide alternatives
Feb 13 19:04:33.901411 kernel: devtmpfs: initialized
Feb 13 19:04:33.901418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:04:33.901425 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:04:33.901434 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:04:33.901441 kernel: SMBIOS 3.0.0 present.
Feb 13 19:04:33.901448 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:04:33.901455 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:04:33.901462 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:04:33.901469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:04:33.901476 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:04:33.901484 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:04:33.901491 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:04:33.901499 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:04:33.901506 kernel: cpuidle: using governor menu
Feb 13 19:04:33.901514 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:04:33.901521 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:04:33.901528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:04:33.901535 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:04:33.901542 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:04:33.901549 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:04:33.901556 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:04:33.901565 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:04:33.901572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:04:33.901579 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:04:33.901587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:04:33.901594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:04:33.901601 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:04:33.901608 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:04:33.901615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:04:33.901622 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:04:33.901630 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:04:33.901637 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:04:33.901645 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:04:33.901652 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:04:33.901659 kernel: ACPI: Interpreter enabled
Feb 13 19:04:33.901666 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:04:33.901680 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:04:33.901689 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:04:33.901696 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:04:33.901705 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:04:33.901850 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:04:33.901924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:04:33.901989 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:04:33.902052 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:04:33.902114 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:04:33.902123 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:04:33.902133 kernel: PCI host bridge to bus 0000:00
Feb 13 19:04:33.902201 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:04:33.902259 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:04:33.902316 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:04:33.902373 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:04:33.902467 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:04:33.902545 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:04:33.902614 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:04:33.902691 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:04:33.902786 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:04:33.902858 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:04:33.902923 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:04:33.902988 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:04:33.903047 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:04:33.903107 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:04:33.903164 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:04:33.903173 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:04:33.903181 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:04:33.903188 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:04:33.903195 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:04:33.903202 kernel: iommu: Default domain type: Translated
Feb 13 19:04:33.903209 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:04:33.903218 kernel: efivars: Registered efivars operations
Feb 13 19:04:33.903225 kernel: vgaarb: loaded
Feb 13 19:04:33.903237 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:04:33.903244 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:04:33.903251 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:04:33.903258 kernel: pnp: PnP ACPI init
Feb 13 19:04:33.903332 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:04:33.903342 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:04:33.903351 kernel: NET: Registered PF_INET protocol family
Feb 13 19:04:33.903359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:04:33.903367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:04:33.903374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:04:33.903381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:04:33.903388 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:04:33.903395 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:04:33.903403 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:04:33.903410 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:04:33.903419 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:04:33.903426 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:04:33.903433 kernel: kvm [1]: HYP mode not available
Feb 13 19:04:33.903440 kernel: Initialise system trusted keyrings
Feb 13 19:04:33.903448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:04:33.903455 kernel: Key type asymmetric registered
Feb 13 19:04:33.903461 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:04:33.903468 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:04:33.903475 kernel: io scheduler mq-deadline registered
Feb 13 19:04:33.903484 kernel: io scheduler kyber registered
Feb 13 19:04:33.903491 kernel: io scheduler bfq registered
Feb 13 19:04:33.903498 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:04:33.903506 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:04:33.903513 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:04:33.903578 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:04:33.903587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:04:33.903595 kernel: thunder_xcv, ver 1.0
Feb 13 19:04:33.903602 kernel: thunder_bgx, ver 1.0
Feb 13 19:04:33.903611 kernel: nicpf, ver 1.0
Feb 13 19:04:33.903618 kernel: nicvf, ver 1.0
Feb 13 19:04:33.903702 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:04:33.903797 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:04:33 UTC (1739473473)
Feb 13 19:04:33.903809 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:04:33.903817 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:04:33.903824 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:04:33.903833 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:04:33.903850 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:04:33.903857 kernel: Segment Routing with IPv6
Feb 13 19:04:33.903865 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:04:33.903872 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:04:33.903884 kernel: Key type dns_resolver registered
Feb 13 19:04:33.903891 kernel: registered taskstats version 1
Feb 13 19:04:33.903898 kernel: Loading compiled-in X.509 certificates
Feb 13 19:04:33.903905 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:04:33.903912 kernel: Key type .fscrypt registered
Feb 13 19:04:33.903921 kernel: Key type fscrypt-provisioning registered
Feb 13 19:04:33.903928 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:04:33.903936 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:04:33.903943 kernel: ima: No architecture policies found
Feb 13 19:04:33.903950 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:04:33.903957 kernel: clk: Disabling unused clocks
Feb 13 19:04:33.903964 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:04:33.903971 kernel: Run /init as init process
Feb 13 19:04:33.903978 kernel: with arguments:
Feb 13 19:04:33.903986 kernel: /init
Feb 13 19:04:33.903993 kernel: with environment:
Feb 13 19:04:33.904000 kernel: HOME=/
Feb 13 19:04:33.904007 kernel: TERM=linux
Feb 13 19:04:33.904014 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:04:33.904023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:04:33.904032 systemd[1]: Detected virtualization kvm.
Feb 13 19:04:33.904039 systemd[1]: Detected architecture arm64.
Feb 13 19:04:33.904048 systemd[1]: Running in initrd.
Feb 13 19:04:33.904055 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:04:33.904063 systemd[1]: Hostname set to .
Feb 13 19:04:33.904070 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:04:33.904078 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:04:33.904085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:04:33.904093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:04:33.904101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:04:33.904110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:04:33.904118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:04:33.904126 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:04:33.904135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:04:33.904143 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:04:33.904151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:04:33.904158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:04:33.904167 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:04:33.904175 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:04:33.904182 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:04:33.904190 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:04:33.904198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:04:33.904205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:04:33.904213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:04:33.904220 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:04:33.904230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:04:33.904237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:04:33.904245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:04:33.904253 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:04:33.904260 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:04:33.904268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:04:33.904275 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:04:33.904283 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:04:33.904290 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:04:33.904299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:04:33.904307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:04:33.904315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:04:33.904323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:04:33.904330 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:04:33.904354 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:04:33.904374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:04:33.904382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:04:33.904391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:04:33.904399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:04:33.904408 systemd-journald[238]: Journal started
Feb 13 19:04:33.904426 systemd-journald[238]: Runtime Journal (/run/log/journal/366b3d8934174020852ecfe13e6dc62e) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:04:33.892597 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:04:33.907902 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:04:33.907929 kernel: Bridge firewalling registered
Feb 13 19:04:33.907871 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:04:33.909942 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:04:33.910855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:04:33.916656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:04:33.918023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:04:33.920244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:04:33.928111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:04:33.929667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:04:33.931823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:04:33.932893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:04:33.941965 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:04:33.943843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:04:33.951358 dracut-cmdline[279]: dracut-dracut-053
Feb 13 19:04:33.953708 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:04:33.969228 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 19:04:33.969304 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:04:33.969336 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:04:33.973841 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 19:04:33.974782 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:04:33.976720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:04:34.024766 kernel: SCSI subsystem initialized
Feb 13 19:04:34.027770 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:04:34.036767 kernel: iscsi: registered transport (tcp)
Feb 13 19:04:34.051784 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:04:34.051811 kernel: QLogic iSCSI HBA Driver
Feb 13 19:04:34.093364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:04:34.101963 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:04:34.117804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:04:34.117862 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:04:34.118991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:04:34.165780 kernel: raid6: neonx8 gen() 15735 MB/s
Feb 13 19:04:34.182767 kernel: raid6: neonx4 gen() 15626 MB/s
Feb 13 19:04:34.199768 kernel: raid6: neonx2 gen() 13333 MB/s
Feb 13 19:04:34.216772 kernel: raid6: neonx1 gen() 10470 MB/s
Feb 13 19:04:34.233785 kernel: raid6: int64x8 gen() 6962 MB/s
Feb 13 19:04:34.250773 kernel: raid6: int64x4 gen() 7340 MB/s
Feb 13 19:04:34.267770 kernel: raid6: int64x2 gen() 6125 MB/s
Feb 13 19:04:34.284770 kernel: raid6: int64x1 gen() 5056 MB/s
Feb 13 19:04:34.284794 kernel: raid6: using algorithm neonx8 gen() 15735 MB/s
Feb 13 19:04:34.301775 kernel: raid6: .... xor() 11903 MB/s, rmw enabled
Feb 13 19:04:34.301789 kernel: raid6: using neon recovery algorithm
Feb 13 19:04:34.307034 kernel: xor: measuring software checksum speed
Feb 13 19:04:34.307051 kernel: 8regs : 19773 MB/sec
Feb 13 19:04:34.308058 kernel: 32regs : 19669 MB/sec
Feb 13 19:04:34.308072 kernel: arm64_neon : 26289 MB/sec
Feb 13 19:04:34.308081 kernel: xor: using function: arm64_neon (26289 MB/sec)
Feb 13 19:04:34.359783 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:04:34.371367 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:04:34.380903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:04:34.395323 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 19:04:34.398468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:04:34.401647 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:04:34.416906 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 19:04:34.445772 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:04:34.452909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:04:34.498783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:04:34.506226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:04:34.517085 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:04:34.518685 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:04:34.520318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:04:34.523333 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:04:34.533033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:04:34.541800 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:04:34.562056 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:04:34.562162 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:04:34.562173 kernel: GPT:9289727 != 19775487
Feb 13 19:04:34.562182 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:04:34.562191 kernel: GPT:9289727 != 19775487
Feb 13 19:04:34.562201 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:04:34.562210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:04:34.548312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:04:34.565958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:04:34.566071 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:04:34.568652 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:04:34.569536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:04:34.569727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:04:34.571517 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:04:34.588044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:04:34.593783 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
Feb 13 19:04:34.598806 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (507)
Feb 13 19:04:34.601173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:04:34.606451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:04:34.610651 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:04:34.614906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:04:34.618350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:04:34.619244 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:04:34.633923 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:04:34.635934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:04:34.640594 disk-uuid[552]: Primary Header is updated.
Feb 13 19:04:34.640594 disk-uuid[552]: Secondary Entries is updated.
Feb 13 19:04:34.640594 disk-uuid[552]: Secondary Header is updated.
Feb 13 19:04:34.646781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:04:34.656028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:04:35.653770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:04:35.656147 disk-uuid[553]: The operation has completed successfully.
Feb 13 19:04:35.675918 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:04:35.676011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:04:35.693975 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:04:35.696803 sh[573]: Success
Feb 13 19:04:35.712793 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:04:35.745302 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:04:35.758191 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:04:35.760167 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:04:35.772085 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:04:35.772133 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:04:35.772144 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:04:35.772155 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:04:35.772762 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:04:35.776874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:04:35.778693 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:04:35.784920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:04:35.786314 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:04:35.794332 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:04:35.794375 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:04:35.794387 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:04:35.797779 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:04:35.804909 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:04:35.805791 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:04:35.811289 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:04:35.817902 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:04:35.885799 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:04:35.895912 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:04:35.917654 ignition[666]: Ignition 2.20.0
Feb 13 19:04:35.917664 ignition[666]: Stage: fetch-offline
Feb 13 19:04:35.917710 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:04:35.917729 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:04:35.919625 systemd-networkd[767]: lo: Link UP
Feb 13 19:04:35.917898 ignition[666]: parsed url from cmdline: ""
Feb 13 19:04:35.919628 systemd-networkd[767]: lo: Gained carrier
Feb 13 19:04:35.917902 ignition[666]: no config URL provided
Feb 13 19:04:35.920352 systemd-networkd[767]: Enumeration completed
Feb 13 19:04:35.917906 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:04:35.920762 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:04:35.917914 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:04:35.920765 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:04:35.917938 ignition[666]: op(1): [started] loading QEMU firmware config module
Feb 13 19:04:35.920971 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:04:35.917942 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:04:35.921458 systemd-networkd[767]: eth0: Link UP
Feb 13 19:04:35.932365 ignition[666]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:04:35.921461 systemd-networkd[767]: eth0: Gained carrier
Feb 13 19:04:35.921467 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:04:35.922665 systemd[1]: Reached target network.target - Network.
Feb 13 19:04:35.945799 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:04:35.977062 ignition[666]: parsing config with SHA512: 474d7fa37ecde08ad8944b1ae3c03e6fabc04c1eb289e5a7fdfec4b7026f3ba117dfc48355f2381230f27e53e6326f5819ed7e706cdc8fd7d00b0317af467054
Feb 13 19:04:35.983804 unknown[666]: fetched base config from "system"
Feb 13 19:04:35.983815 unknown[666]: fetched user config from "qemu"
Feb 13 19:04:35.984303 ignition[666]: fetch-offline: fetch-offline passed
Feb 13 19:04:35.986057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:04:35.984375 ignition[666]: Ignition finished successfully
Feb 13 19:04:35.987079 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:04:35.999921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:04:36.011467 ignition[774]: Ignition 2.20.0
Feb 13 19:04:36.011478 ignition[774]: Stage: kargs
Feb 13 19:04:36.011644 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:04:36.011653 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:04:36.012619 ignition[774]: kargs: kargs passed
Feb 13 19:04:36.012680 ignition[774]: Ignition finished successfully
Feb 13 19:04:36.014847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:04:36.024933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:04:36.034918 ignition[783]: Ignition 2.20.0
Feb 13 19:04:36.034928 ignition[783]: Stage: disks
Feb 13 19:04:36.035098 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:04:36.035108 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:04:36.036075 ignition[783]: disks: disks passed
Feb 13 19:04:36.036134 ignition[783]: Ignition finished successfully
Feb 13 19:04:36.038816 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:04:36.039929 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:04:36.040952 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:04:36.042382 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:04:36.043877 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:04:36.045310 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:04:36.056911 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:04:36.066290 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:04:36.070046 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:04:36.083872 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:04:36.126776 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:04:36.127000 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:04:36.128084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:04:36.138832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:04:36.140697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:04:36.141588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:04:36.141627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:04:36.141648 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:04:36.146587 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:04:36.148084 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:04:36.152402 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802)
Feb 13 19:04:36.152429 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:04:36.152440 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:04:36.153798 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:04:36.155773 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:04:36.157649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:04:36.193820 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:04:36.197699 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:04:36.200826 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:04:36.203700 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:04:36.274636 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:04:36.291865 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:04:36.293276 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:04:36.298774 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:04:36.313901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:04:36.316172 ignition[916]: INFO : Ignition 2.20.0
Feb 13 19:04:36.316172 ignition[916]: INFO : Stage: mount
Feb 13 19:04:36.317362 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:04:36.317362 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:04:36.317362 ignition[916]: INFO : mount: mount passed
Feb 13 19:04:36.320016 ignition[916]: INFO : Ignition finished successfully
Feb 13 19:04:36.318585 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:04:36.324868 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:04:36.769884 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:04:36.784006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:04:36.791250 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 19:04:36.791286 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:04:36.791297 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:04:36.791912 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:04:36.794764 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:04:36.795803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:04:36.814958 ignition[946]: INFO : Ignition 2.20.0
Feb 13 19:04:36.814958 ignition[946]: INFO : Stage: files
Feb 13 19:04:36.816212 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:04:36.816212 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:04:36.816212 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:04:36.819189 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:04:36.819189 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:04:36.819189 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:04:36.819189 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:04:36.823001 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:04:36.823001 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:04:36.823001 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:04:36.823001 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:04:36.823001 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:04:36.819449 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 19:04:37.090224 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:04:37.264980 systemd-networkd[767]: eth0: Gained IPv6LL
Feb 13 19:04:37.393856 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:04:37.393856 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:04:37.397111 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:04:37.619014 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 19:04:37.685050 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:04:37.686655 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:04:37.851455 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 19:04:38.043732 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:04:38.043732 ignition[946]: INFO : files: op(d): [started] processing unit "containerd.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(d): [finished] processing unit "containerd.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 13 19:04:38.046786 ignition[946]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:04:38.069411 ignition[946]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:04:38.072948 ignition[946]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:04:38.074221 ignition[946]: INFO : files: files passed
Feb 13 19:04:38.074221 ignition[946]: INFO : Ignition finished successfully
Feb 13 19:04:38.075684 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:04:38.082892 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:04:38.085621 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:04:38.088356 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:04:38.088452 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:04:38.092063 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:04:38.095087 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:04:38.096327 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:04:38.097485 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:04:38.097225 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:04:38.098538 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:04:38.103877 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:04:38.120962 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:04:38.121075 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:04:38.122765 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:04:38.124939 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:04:38.125714 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:04:38.126408 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:04:38.140591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:04:38.150907 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:04:38.159616 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:04:38.160594 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:04:38.162206 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:04:38.163512 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:04:38.163633 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:04:38.165518 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:04:38.167006 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:04:38.168223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:04:38.169500 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:04:38.170913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:04:38.172384 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:04:38.173718 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:04:38.175180 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:04:38.176573 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:04:38.177845 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:04:38.178990 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:04:38.179107 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:04:38.180844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:04:38.182320 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:04:38.183868 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:04:38.184824 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:04:38.186337 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:04:38.186449 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:04:38.188839 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:04:38.188948 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:04:38.190534 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:04:38.191813 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:04:38.191904 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:04:38.193523 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:04:38.194839 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:04:38.196302 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:04:38.196390 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:04:38.198058 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:04:38.198136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:04:38.199401 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:04:38.199503 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:04:38.200879 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:04:38.200976 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:04:38.211910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 19:04:38.212647 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:04:38.212795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:04:38.215383 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:04:38.216429 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:04:38.216544 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:04:38.218098 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:04:38.218224 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:04:38.224295 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:04:38.225152 ignition[1000]: INFO : Ignition 2.20.0 Feb 13 19:04:38.225152 ignition[1000]: INFO : Stage: umount Feb 13 19:04:38.225152 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:38.225152 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:04:38.229300 ignition[1000]: INFO : umount: umount passed Feb 13 19:04:38.229300 ignition[1000]: INFO : Ignition finished successfully Feb 13 19:04:38.225786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:04:38.227910 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:04:38.228406 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:04:38.229797 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:04:38.231231 systemd[1]: Stopped target network.target - Network. Feb 13 19:04:38.232137 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:04:38.232224 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:04:38.233771 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:04:38.233822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:04:38.235213 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:04:38.235254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:04:38.236564 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:04:38.236603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:04:38.238224 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:04:38.239571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:04:38.243219 systemd-networkd[767]: eth0: DHCPv6 lease lost Feb 13 19:04:38.243334 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:04:38.243457 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:04:38.245943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:04:38.245997 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:04:38.247515 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:04:38.247613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:04:38.249265 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:04:38.249320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:04:38.261867 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:04:38.262554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Feb 13 19:04:38.262617 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:04:38.264142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:04:38.264184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:04:38.265568 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:04:38.265610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:04:38.267317 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:04:38.275591 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:04:38.275721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:04:38.278150 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:04:38.278273 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:04:38.279928 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:04:38.280000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:04:38.280997 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:04:38.281031 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:04:38.282467 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:04:38.282512 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:04:38.285087 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:04:38.285135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:04:38.287307 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:04:38.287377 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:04:38.302969 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:04:38.303771 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:04:38.303831 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:04:38.305606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:04:38.305651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:38.310712 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:04:38.310814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:04:38.312187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:04:38.312262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:04:38.315187 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:04:38.316412 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:04:38.316474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:04:38.318839 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:04:38.328863 systemd[1]: Switching root. Feb 13 19:04:38.362411 systemd-journald[238]: Journal stopped Feb 13 19:04:39.192856 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
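The "Switching root" handoff above is the end of the initrd: PID 1 serializes its state, sends SIGTERM to the initrd journald (the last message it logs before stopping), and isolates onto the real root at /sysroot. In stock systemd this is driven by initrd-switch-root.service, which reduces to roughly the following one-shot unit (an abridged reconstruction, not a file read from this image):

    [Unit]
    Description=Switch Root
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=systemctl --no-block switch-root /sysroot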
Feb 13 19:04:39.192918 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:04:39.192932 kernel: SELinux: policy capability open_perms=1 Feb 13 19:04:39.192942 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:04:39.192955 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:04:39.192964 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:04:39.192974 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:04:39.193026 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:04:39.193040 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:04:39.193050 kernel: audit: type=1403 audit(1739473478.569:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:04:39.193062 systemd[1]: Successfully loaded SELinux policy in 36.170ms. Feb 13 19:04:39.193083 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.756ms. Feb 13 19:04:39.193096 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:04:39.193107 systemd[1]: Detected virtualization kvm. Feb 13 19:04:39.193119 systemd[1]: Detected architecture arm64. Feb 13 19:04:39.193129 systemd[1]: Detected first boot. Feb 13 19:04:39.193139 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:04:39.193150 zram_generator::config[1064]: No configuration found. Feb 13 19:04:39.193161 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:04:39.193204 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:04:39.193221 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:04:39.193236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:04:39.193247 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:04:39.193258 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:04:39.193271 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:04:39.193282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:04:39.193293 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:04:39.193304 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:04:39.193314 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:04:39.193327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:04:39.193338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:04:39.193349 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:04:39.193392 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:04:39.193411 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:04:39.193423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:04:39.193433 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:04:39.193444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:04:39.193454 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:04:39.193467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:04:39.193478 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:04:39.193488 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:04:39.193499 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:04:39.193509 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:04:39.193520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:04:39.193531 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:04:39.193577 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:04:39.193597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:04:39.193608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:04:39.193619 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:04:39.193629 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:04:39.193640 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:04:39.193651 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:04:39.193669 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:04:39.193682 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:04:39.193693 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:04:39.193707 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:04:39.193717 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:04:39.193776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:04:39.193790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:04:39.193801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:04:39.193811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:04:39.193821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:04:39.193832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:04:39.193842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:04:39.193856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:04:39.193867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:04:39.193880 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:04:39.193891 kernel: fuse: init (API version 7.39) Feb 13 19:04:39.193901 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
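The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs being queued above are all instances of systemd's modprobe@.service template, which maps an instance name onto a single modprobe invocation. Roughly (reconstructed from the stock template; the exact flags are an assumption):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # the leading '-' makes a missing module non-fatal
    ExecStart=-/sbin/modprobe -abq %i

That tolerance is why the units succeed whether a module is built in, loadable ("fuse: init", "loop: module loaded" nearby in the log), or absent.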
Feb 13 19:04:39.193911 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:04:39.193961 kernel: ACPI: bus type drm_connector registered Feb 13 19:04:39.193977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:04:39.193990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:04:39.194031 systemd-journald[1140]: Collecting audit messages is disabled. Feb 13 19:04:39.194059 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:04:39.194071 systemd-journald[1140]: Journal started Feb 13 19:04:39.194093 systemd-journald[1140]: Runtime Journal (/run/log/journal/366b3d8934174020852ecfe13e6dc62e) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:04:39.198782 kernel: loop: module loaded Feb 13 19:04:39.203892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:04:39.205851 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:04:39.206823 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:04:39.207731 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:04:39.208851 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:04:39.209656 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:04:39.210657 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:04:39.211674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:04:39.212845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:04:39.214049 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:04:39.214234 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:04:39.215729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:04:39.215936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:04:39.217193 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:04:39.217356 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:04:39.218589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:04:39.218777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:04:39.220042 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:04:39.221486 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:04:39.221652 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:04:39.222912 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:04:39.223132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:04:39.224367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:04:39.225980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:04:39.227416 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:04:39.239087 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:04:39.245897 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:04:39.250892 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 19:04:39.251906 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:04:39.256045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:04:39.259926 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:04:39.260908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:04:39.262404 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:04:39.263403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:04:39.266064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:04:39.268924 systemd-journald[1140]: Time spent on flushing to /var/log/journal/366b3d8934174020852ecfe13e6dc62e is 17.608ms for 847 entries. Feb 13 19:04:39.268924 systemd-journald[1140]: System Journal (/var/log/journal/366b3d8934174020852ecfe13e6dc62e) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:04:39.300506 systemd-journald[1140]: Received client request to flush runtime journal. Feb 13 19:04:39.270118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:04:39.274734 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:04:39.276148 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:04:39.278351 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:04:39.281633 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:04:39.283593 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:04:39.296078 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:04:39.300452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:04:39.302800 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:04:39.307623 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:04:39.308100 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 19:04:39.308123 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 19:04:39.314851 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:04:39.327023 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:04:39.350147 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:04:39.358089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:04:39.371050 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 19:04:39.371072 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 19:04:39.375162 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:04:39.711977 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:04:39.723065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
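The journal activity above is the runtime-to-persistent handover: early messages accumulate in /run/log/journal (capped at 47.3M here) and are flushed into /var/log/journal (capped at 195.6M) once the root filesystem is writable. Which mode is used is governed by journald.conf; a sketch of the default-ish setting assumed here:

    # /etc/systemd/journald.conf (illustrative)
    [Journal]
    # auto = persistent when /var/log/journal exists, volatile otherwise
    Storage=auto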
Feb 13 19:04:39.747207 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Feb 13 19:04:39.762247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:04:39.779100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:04:39.789114 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 19:04:39.802025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1228) Feb 13 19:04:39.842941 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:04:39.852749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:04:39.880809 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:04:39.917914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:04:39.925340 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:04:39.929265 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:04:39.965915 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:04:39.969977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:39.970053 systemd-networkd[1230]: lo: Link UP Feb 13 19:04:39.970057 systemd-networkd[1230]: lo: Gained carrier Feb 13 19:04:39.970959 systemd-networkd[1230]: Enumeration completed Feb 13 19:04:39.971184 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:04:39.971525 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:04:39.971528 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:04:39.972374 systemd-networkd[1230]: eth0: Link UP Feb 13 19:04:39.972377 systemd-networkd[1230]: eth0: Gained carrier Feb 13 19:04:39.972391 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:04:39.977933 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:04:39.979810 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:04:39.993113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:04:39.994427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:04:40.005965 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:04:40.011519 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:04:40.039648 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:04:40.041078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:04:40.042105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:04:40.042161 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:04:40.042987 systemd[1]: Reached target machines.target - Containers. 
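The eth0 setup above follows from networkd matching the interface against Flatcar's catch-all /usr/lib/systemd/network/zz-default.network; the "potentially unpredictable interface name" warning just notes that the match is by name glob. Only the unit's path appears in the log, but a catch-all like this one plausibly contains little more than:

    # /usr/lib/systemd/network/zz-default.network (assumed contents)
    [Match]
    Name=*

    [Network]
    DHCP=yes

which is consistent with the DHCPv4 lease for 10.0.0.61/16 (gateway 10.0.0.1) logged just above.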
Feb 13 19:04:40.045542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:04:40.060940 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:04:40.063436 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:04:40.064394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:04:40.065542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:04:40.067778 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:04:40.072325 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:04:40.074104 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:04:40.084713 kernel: loop0: detected capacity change from 0 to 194096 Feb 13 19:04:40.087541 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:04:40.094221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:04:40.103111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:04:40.104690 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:04:40.117783 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 19:04:40.168791 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 19:04:40.212794 kernel: loop3: detected capacity change from 0 to 194096 Feb 13 19:04:40.219801 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 19:04:40.225787 kernel: loop5: detected capacity change from 0 to 116808 Feb 13 19:04:40.228909 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:04:40.229326 (sd-merge)[1291]: Merged extensions into '/usr'. Feb 13 19:04:40.234853 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:04:40.234875 systemd[1]: Reloading... Feb 13 19:04:40.282783 zram_generator::config[1319]: No configuration found. Feb 13 19:04:40.344839 ldconfig[1273]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:04:40.398170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:40.442295 systemd[1]: Reloading finished in 206 ms. Feb 13 19:04:40.460713 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:04:40.463127 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:04:40.479942 systemd[1]: Starting ensure-sysext.service... Feb 13 19:04:40.482041 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:04:40.487324 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:04:40.487345 systemd[1]: Reloading... Feb 13 19:04:40.499896 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
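The loop0-loop5 scans and the (sd-merge) lines above are systemd-sysext merging the three extension images, including the kubernetes-v1.30.1 image Ignition downloaded earlier, into an overlay on /usr. For the merge to be accepted, each .raw image must carry an extension-release file whose fields match the host; a plausible sketch for the kubernetes image, with assumed field values:

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the .raw; assumed)
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64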
Feb 13 19:04:40.500154 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:04:40.500833 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:04:40.501048 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 19:04:40.501101 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 19:04:40.503644 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:04:40.503666 systemd-tmpfiles[1361]: Skipping /boot Feb 13 19:04:40.510876 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:04:40.510893 systemd-tmpfiles[1361]: Skipping /boot Feb 13 19:04:40.539879 zram_generator::config[1397]: No configuration found. Feb 13 19:04:40.630826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:40.673579 systemd[1]: Reloading finished in 185 ms. Feb 13 19:04:40.689830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:04:40.712427 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:04:40.714804 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:04:40.716909 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:04:40.719892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:04:40.724261 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:04:40.732269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:04:40.736163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:04:40.741984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:04:40.744286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:04:40.745404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:04:40.746347 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:04:40.747860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:04:40.748015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:04:40.752954 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:04:40.753146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:04:40.754750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:04:40.755039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:04:40.762059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:04:40.762264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:04:40.777160 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
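The "Duplicate line for path" messages above are a benign tmpfiles.d collision: more than one fragment declares the same path, and systemd-tmpfiles keeps the first declaration it parses while ignoring the rest. Schematically (bodies invented; the log records only file names and line numbers):

    # /usr/lib/tmpfiles.d/provision.conf:20 (hypothetical content)
    d /root 0700 root root -
    # any later fragment redeclaring /root is ignored, as logged

"Skipping /boot" is likewise routine: /boot is an automount point at this stage, so tmpfiles leaves it alone rather than triggering the mount.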
Feb 13 19:04:40.778985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:04:40.784263 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:04:40.787576 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:04:40.789592 augenrules[1475]: No rules Feb 13 19:04:40.790015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:04:40.815127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:04:40.817333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:04:40.821064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:04:40.822272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:04:40.822553 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:04:40.823706 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:04:40.824098 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:04:40.825898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:04:40.826175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:04:40.826432 systemd-resolved[1435]: Positive Trust Anchors: Feb 13 19:04:40.826590 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:04:40.826621 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:04:40.828203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:04:40.828353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:04:40.830001 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:04:40.830212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:04:40.838036 systemd-resolved[1435]: Defaulting to hostname 'linux'. Feb 13 19:04:40.849048 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:04:40.849926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:04:40.851396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:04:40.853540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:04:40.858464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:04:40.865077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
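The Positive Trust Anchor above is the DNS root zone's key-signing key (key tag 20326) that systemd-resolved compiles in for DNSSEC, and the negative anchors are the standard private and reverse zones exempted from validation. Whether validation is actually enforced is a resolved.conf policy decision; an illustrative setting (not read from this host):

    # /etc/systemd/resolved.conf (illustrative)
    [Resolve]
    # validate when the upstream supports it, degrade silently when not
    DNSSEC=allow-downgrade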
Feb 13 19:04:40.866010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:04:40.866166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:04:40.866925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:04:40.869020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:04:40.869214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:04:40.870256 augenrules[1493]: /sbin/augenrules: No change Feb 13 19:04:40.870631 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:04:40.870816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:04:40.872146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:04:40.872308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:04:40.874093 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:04:40.874342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:04:40.880455 systemd[1]: Finished ensure-sysext.service. Feb 13 19:04:40.881405 augenrules[1519]: No rules Feb 13 19:04:40.883298 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:04:40.883575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:04:40.884812 systemd[1]: Reached target network.target - Network. Feb 13 19:04:40.885604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:04:40.886931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:04:40.887028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:04:40.901031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:04:40.950414 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:04:40.951260 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:04:40.951311 systemd-timesyncd[1532]: Initial clock synchronization to Thu 2025-02-13 19:04:41.085056 UTC. Feb 13 19:04:40.951896 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:04:40.952918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:04:40.953929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:04:40.954856 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:04:40.955806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:04:40.955840 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:04:40.956494 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:04:40.957446 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:04:40.958399 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
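systemd-timesyncd's startup above shows a plain SNTP sync against 10.0.0.1:123, stepping the clock forward by roughly 0.13 s (logged at 19:04:40.95, set to 19:04:41.085). In a QEMU setup like this the server address typically arrives via DHCP; an equivalent static configuration would be (assumed, not from this host):

    # /etc/systemd/timesyncd.conf (illustrative)
    [Time]
    NTP=10.0.0.1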
Feb 13 19:04:40.959370 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:04:40.960861 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:04:40.963419 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:04:40.965578 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:04:40.974894 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:04:40.975782 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:04:40.976504 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:04:40.977361 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:04:40.977415 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:04:40.977436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:04:40.978857 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:04:40.980829 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:04:40.982683 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:04:40.986915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:04:40.987808 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:04:40.989051 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:04:40.996869 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:04:40.999926 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:04:41.000770 jq[1538]: false Feb 13 19:04:41.006238 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:04:41.009978 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:04:41.011733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:04:41.013710 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:04:41.019946 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:04:41.022625 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:04:41.022905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:04:41.026158 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:04:41.026385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:04:41.034182 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:04:41.037048 jq[1556]: true Feb 13 19:04:41.034452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:04:41.041674 extend-filesystems[1540]: Found loop3 Feb 13 19:04:41.040713 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:04:41.040320 dbus-daemon[1537]: [system] SELinux support is enabled Feb 13 19:04:41.044170 extend-filesystems[1540]: Found loop4 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found loop5 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda1 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda2 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda3 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found usr Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda4 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda6 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda7 Feb 13 19:04:41.044170 extend-filesystems[1540]: Found vda9 Feb 13 19:04:41.044170 extend-filesystems[1540]: Checking size of /dev/vda9 Feb 13 19:04:41.042700 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:04:41.077691 tar[1559]: linux-arm64/helm Feb 13 19:04:41.057937 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:04:41.057979 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:04:41.063826 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:04:41.063861 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:04:41.087703 jq[1574]: true Feb 13 19:04:41.091531 extend-filesystems[1540]: Resized partition /dev/vda9 Feb 13 19:04:41.100851 extend-filesystems[1582]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:04:41.102938 update_engine[1552]: I20250213 19:04:41.102663 1552 main.cc:92] Flatcar Update Engine starting Feb 13 19:04:41.103842 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:04:41.119717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1242) Feb 13 19:04:41.109528 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:04:41.119906 update_engine[1552]: I20250213 19:04:41.110167 1552 update_check_scheduler.cc:74] Next update check in 7m48s Feb 13 19:04:41.113888 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:04:41.126981 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:04:41.136791 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:04:41.161193 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:04:41.162097 systemd-logind[1550]: New seat seat0. Feb 13 19:04:41.162405 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:04:41.162405 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:04:41.162405 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:04:41.172841 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Feb 13 19:04:41.166031 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:04:41.166326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
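The extend-filesystems pass above is the usual Flatcar first-boot grow: the root partition is enlarged, then resize2fs stretches the ext4 filesystem online. The logged 4 KiB block counts work out to:

    553472  blocks x 4096 B = 2267021312 B  ~ 2.1 GiB  (as shipped)
    1864699 blocks x 4096 B = 7637807104 B  ~ 7.1 GiB  (after resize)

i.e. the stock image expanded to fill a roughly 7 GiB virtual disk.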
Feb 13 19:04:41.171243 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:04:41.207637 bash[1603]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:04:41.209508 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:04:41.214442 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:04:41.223612 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:04:41.308699 containerd[1566]: time="2025-02-13T19:04:41.308292644Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:04:41.339009 containerd[1566]: time="2025-02-13T19:04:41.338955153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:04:41.340699 containerd[1566]: time="2025-02-13T19:04:41.340498232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:04:41.340699 containerd[1566]: time="2025-02-13T19:04:41.340534835Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:04:41.340699 containerd[1566]: time="2025-02-13T19:04:41.340554275Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:04:41.340791 containerd[1566]: time="2025-02-13T19:04:41.340714716Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:04:41.340791 containerd[1566]: time="2025-02-13T19:04:41.340731756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:04:41.340847 containerd[1566]: time="2025-02-13T19:04:41.340808337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:04:41.340847 containerd[1566]: time="2025-02-13T19:04:41.340821839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:04:41.341066 containerd[1566]: time="2025-02-13T19:04:41.341019289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:04:41.341066 containerd[1566]: time="2025-02-13T19:04:41.341044097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:04:41.341066 containerd[1566]: time="2025-02-13T19:04:41.341058128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:04:41.341066 containerd[1566]: time="2025-02-13T19:04:41.341068580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:04:41.341154 containerd[1566]: time="2025-02-13T19:04:41.341141257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1
Feb 13 19:04:41.341405 containerd[1566]: time="2025-02-13T19:04:41.341370388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:04:41.341530 containerd[1566]: time="2025-02-13T19:04:41.341510535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:04:41.341554 containerd[1566]: time="2025-02-13T19:04:41.341530504Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:04:41.341623 containerd[1566]: time="2025-02-13T19:04:41.341608508Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:04:41.341668 containerd[1566]: time="2025-02-13T19:04:41.341655400Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:04:41.345053 containerd[1566]: time="2025-02-13T19:04:41.344980532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:04:41.345053 containerd[1566]: time="2025-02-13T19:04:41.345035761Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:04:41.345053 containerd[1566]: time="2025-02-13T19:04:41.345050930Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:04:41.345180 containerd[1566]: time="2025-02-13T19:04:41.345065287Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:04:41.345180 containerd[1566]: time="2025-02-13T19:04:41.345081066Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:04:41.345261 containerd[1566]: time="2025-02-13T19:04:41.345223165Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:04:41.345561 containerd[1566]: time="2025-02-13T19:04:41.345542461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:04:41.345672 containerd[1566]: time="2025-02-13T19:04:41.345655115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:04:41.345702 containerd[1566]: time="2025-02-13T19:04:41.345677484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:04:41.345702 containerd[1566]: time="2025-02-13T19:04:41.345693223Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:04:41.345737 containerd[1566]: time="2025-02-13T19:04:41.345708067Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345737 containerd[1566]: time="2025-02-13T19:04:41.345721000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345737 containerd[1566]: time="2025-02-13T19:04:41.345733485Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345819 containerd[1566]: time="2025-02-13T19:04:41.345747842Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345819 containerd[1566]: time="2025-02-13T19:04:41.345762605Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345819 containerd[1566]: time="2025-02-13T19:04:41.345799858Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345819 containerd[1566]: time="2025-02-13T19:04:41.345813523Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345825520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345845367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345858259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345870135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345882091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345893 containerd[1566]: time="2025-02-13T19:04:41.345893926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345907713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345919426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345931790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345944763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345959282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345969978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345983236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.345994 containerd[1566]: time="2025-02-13T19:04:41.345996413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.346127 containerd[1566]: time="2025-02-13T19:04:41.346011664Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:04:41.346127 containerd[1566]: time="2025-02-13T19:04:41.346033341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.346127 containerd[1566]: time="2025-02-13T19:04:41.346047819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.346127 containerd[1566]: time="2025-02-13T19:04:41.346059044Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:04:41.346278 containerd[1566]: time="2025-02-13T19:04:41.346261537Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:04:41.346302 containerd[1566]: time="2025-02-13T19:04:41.346283295Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:04:41.346302 containerd[1566]: time="2025-02-13T19:04:41.346298018Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:04:41.346353 containerd[1566]: time="2025-02-13T19:04:41.346310381Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:04:41.346353 containerd[1566]: time="2025-02-13T19:04:41.346320061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.346353 containerd[1566]: time="2025-02-13T19:04:41.346331855Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:04:41.346353 containerd[1566]: time="2025-02-13T19:04:41.346342103Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:04:41.346420 containerd[1566]: time="2025-02-13T19:04:41.346360771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:04:41.346760 containerd[1566]: time="2025-02-13T19:04:41.346701824Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:04:41.346760 containerd[1566]: time="2025-02-13T19:04:41.346755711Z" level=info msg="Connect containerd service"
Feb 13 19:04:41.346908 containerd[1566]: time="2025-02-13T19:04:41.346803091Z" level=info msg="using legacy CRI server"
Feb 13 19:04:41.346908 containerd[1566]: time="2025-02-13T19:04:41.346811795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:04:41.347058 containerd[1566]: time="2025-02-13T19:04:41.347039462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:04:41.347729 containerd[1566]: time="2025-02-13T19:04:41.347701642Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.347936101Z" level=info msg="Start subscribing containerd event"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348002961Z" level=info msg="Start recovering state"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348067463Z" level=info msg="Start event monitor"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348080152Z" level=info msg="Start snapshots syncer"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348090848Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348099470Z" level=info msg="Start streaming server"
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348616582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348663270Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:04:41.349515 containerd[1566]: time="2025-02-13T19:04:41.348715042Z" level=info msg="containerd successfully booted in 0.041297s"
Feb 13 19:04:41.348911 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:04:41.488884 systemd-networkd[1230]: eth0: Gained IPv6LL
Feb 13 19:04:41.491467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:04:41.494398 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:04:41.502152 tar[1559]: linux-arm64/LICENSE
Feb 13 19:04:41.502212 tar[1559]: linux-arm64/README.md
Feb 13 19:04:41.505207 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:04:41.510972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:04:41.513067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:04:41.522409 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:04:41.535097 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:04:41.540502 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:04:41.541222 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 19:04:41.543117 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:04:41.548108 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:04:41.558225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:04:41.568040 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:04:41.574938 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:04:41.575198 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:04:41.578514 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:04:41.591653 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:04:41.605073 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:04:41.607396 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 19:04:41.608521 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:04:42.052553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:04:42.053998 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:04:42.054994 systemd[1]: Startup finished in 5.395s (kernel) + 3.521s (userspace) = 8.917s.
Feb 13 19:04:42.056954 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:04:42.576872 kubelet[1673]: E0213 19:04:42.576760 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:04:42.578692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:04:42.578891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:04:46.437456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:04:46.451047 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:56748.service - OpenSSH per-connection server daemon (10.0.0.1:56748).
Feb 13 19:04:46.534114 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 56748 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:46.535927 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:46.557872 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:04:46.566991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:04:46.568603 systemd-logind[1550]: New session 1 of user core.
Feb 13 19:04:46.578171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:04:46.580743 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:04:46.587229 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:04:46.660907 systemd[1694]: Queued start job for default target default.target.
Feb 13 19:04:46.661325 systemd[1694]: Created slice app.slice - User Application Slice.
Feb 13 19:04:46.661351 systemd[1694]: Reached target paths.target - Paths.
Feb 13 19:04:46.661361 systemd[1694]: Reached target timers.target - Timers.
Feb 13 19:04:46.674883 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:04:46.681108 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:04:46.681179 systemd[1694]: Reached target sockets.target - Sockets.
Feb 13 19:04:46.681201 systemd[1694]: Reached target basic.target - Basic System.
Feb 13 19:04:46.681243 systemd[1694]: Reached target default.target - Main User Target.
Feb 13 19:04:46.681269 systemd[1694]: Startup finished in 88ms.
Feb 13 19:04:46.681515 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:04:46.683161 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:04:46.743112 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758).
Feb 13 19:04:46.785347 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:46.786526 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:46.790735 systemd-logind[1550]: New session 2 of user core.
Feb 13 19:04:46.799088 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:04:46.853803 sshd[1709]: Connection closed by 10.0.0.1 port 56758
Feb 13 19:04:46.854216 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:46.871054 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:56768.service - OpenSSH per-connection server daemon (10.0.0.1:56768).
Feb 13 19:04:46.871430 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:56758.service: Deactivated successfully.
Feb 13 19:04:46.873146 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:04:46.873709 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:04:46.875008 systemd-logind[1550]: Removed session 2.
Feb 13 19:04:46.915257 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 56768 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:46.916390 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:46.920111 systemd-logind[1550]: New session 3 of user core.
Feb 13 19:04:46.932011 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:04:46.980679 sshd[1717]: Connection closed by 10.0.0.1 port 56768
Feb 13 19:04:46.981362 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:46.992975 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:56782.service - OpenSSH per-connection server daemon (10.0.0.1:56782).
Feb 13 19:04:46.993340 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:56768.service: Deactivated successfully.
Feb 13 19:04:46.995277 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:04:46.995780 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:04:46.997183 systemd-logind[1550]: Removed session 3.
Feb 13 19:04:47.036027 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 56782 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:47.037522 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:47.041791 systemd-logind[1550]: New session 4 of user core.
Feb 13 19:04:47.053030 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:04:47.106132 sshd[1725]: Connection closed by 10.0.0.1 port 56782
Feb 13 19:04:47.106429 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:47.120042 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:56792.service - OpenSSH per-connection server daemon (10.0.0.1:56792).
Feb 13 19:04:47.120421 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:56782.service: Deactivated successfully.
Feb 13 19:04:47.122254 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:04:47.122862 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:04:47.123805 systemd-logind[1550]: Removed session 4.
Feb 13 19:04:47.161701 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56792 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:47.162885 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:47.166932 systemd-logind[1550]: New session 5 of user core.
Feb 13 19:04:47.177020 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:04:47.239470 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:04:47.239814 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:04:47.255718 sudo[1734]: pam_unix(sudo:session): session closed for user root
Feb 13 19:04:47.257754 sshd[1733]: Connection closed by 10.0.0.1 port 56792
Feb 13 19:04:47.258106 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:47.270101 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796).
Feb 13 19:04:47.270530 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:56792.service: Deactivated successfully.
Feb 13 19:04:47.272025 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:04:47.272670 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:04:47.274157 systemd-logind[1550]: Removed session 5.
Feb 13 19:04:47.312220 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:47.313517 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:47.317894 systemd-logind[1550]: New session 6 of user core.
Feb 13 19:04:47.332055 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:04:47.384576 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:04:47.384866 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:04:47.388352 sudo[1744]: pam_unix(sudo:session): session closed for user root
Feb 13 19:04:47.392734 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:04:47.393021 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:04:47.411035 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:04:47.433619 augenrules[1766]: No rules
Feb 13 19:04:47.434836 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:04:47.435076 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:04:47.436422 sudo[1743]: pam_unix(sudo:session): session closed for user root
Feb 13 19:04:47.438275 sshd[1742]: Connection closed by 10.0.0.1 port 56796
Feb 13 19:04:47.438891 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:47.450052 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:56812.service - OpenSSH per-connection server daemon (10.0.0.1:56812).
Feb 13 19:04:47.450438 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:56796.service: Deactivated successfully.
Feb 13 19:04:47.452039 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:04:47.452864 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:04:47.454219 systemd-logind[1550]: Removed session 6.
Feb 13 19:04:47.496204 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 56812 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:04:47.497421 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:47.501448 systemd-logind[1550]: New session 7 of user core.
Feb 13 19:04:47.508047 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:04:47.559460 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:04:47.559737 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:04:47.913033 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:04:47.913237 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:04:48.158728 dockerd[1800]: time="2025-02-13T19:04:48.158379584Z" level=info msg="Starting up"
Feb 13 19:04:48.413280 dockerd[1800]: time="2025-02-13T19:04:48.413032035Z" level=info msg="Loading containers: start."
Feb 13 19:04:48.562798 kernel: Initializing XFRM netlink socket
Feb 13 19:04:48.642205 systemd-networkd[1230]: docker0: Link UP
Feb 13 19:04:48.684395 dockerd[1800]: time="2025-02-13T19:04:48.684281288Z" level=info msg="Loading containers: done."
Feb 13 19:04:48.701470 dockerd[1800]: time="2025-02-13T19:04:48.701409600Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:04:48.701619 dockerd[1800]: time="2025-02-13T19:04:48.701524913Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 19:04:48.701652 dockerd[1800]: time="2025-02-13T19:04:48.701636441Z" level=info msg="Daemon has completed initialization"
Feb 13 19:04:48.732635 dockerd[1800]: time="2025-02-13T19:04:48.732576854Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:04:48.732798 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:04:49.486629 containerd[1566]: time="2025-02-13T19:04:49.486575962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 19:04:50.136618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656965444.mount: Deactivated successfully.
Feb 13 19:04:51.796825 containerd[1566]: time="2025-02-13T19:04:51.796300095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:51.797399 containerd[1566]: time="2025-02-13T19:04:51.797353312Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209"
Feb 13 19:04:51.798666 containerd[1566]: time="2025-02-13T19:04:51.798622033Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:51.801652 containerd[1566]: time="2025-02-13T19:04:51.801615395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:51.802882 containerd[1566]: time="2025-02-13T19:04:51.802842694Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.316219715s"
Feb 13 19:04:51.802927 containerd[1566]: time="2025-02-13T19:04:51.802887852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 19:04:51.821302 containerd[1566]: time="2025-02-13T19:04:51.821243594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 19:04:52.829256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:04:52.842961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:04:52.935944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:04:52.943461 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:04:53.055762 kubelet[2075]: E0213 19:04:53.055664 2075 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:04:53.059647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:04:53.059878 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:04:53.407353 containerd[1566]: time="2025-02-13T19:04:53.407307430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:53.410175 containerd[1566]: time="2025-02-13T19:04:53.410101497Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596"
Feb 13 19:04:53.411813 containerd[1566]: time="2025-02-13T19:04:53.410963794Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:53.414641 containerd[1566]: time="2025-02-13T19:04:53.414597722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:53.416146 containerd[1566]: time="2025-02-13T19:04:53.416102136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.594787479s"
Feb 13 19:04:53.416146 containerd[1566]: time="2025-02-13T19:04:53.416140064Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 19:04:53.435630 containerd[1566]: time="2025-02-13T19:04:53.435568895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 19:04:54.596079 containerd[1566]: time="2025-02-13T19:04:54.596023793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:54.597740 containerd[1566]: time="2025-02-13T19:04:54.597696552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936"
Feb 13 19:04:54.598882 containerd[1566]: time="2025-02-13T19:04:54.598819374Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:54.602129 containerd[1566]: time="2025-02-13T19:04:54.602097374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:54.604087 containerd[1566]: time="2025-02-13T19:04:54.603856628Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.168246277s"
Feb 13 19:04:54.604087 containerd[1566]: time="2025-02-13T19:04:54.603891892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 19:04:54.621367 containerd[1566]: time="2025-02-13T19:04:54.621332140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 19:04:55.839345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603602421.mount: Deactivated successfully.
Feb 13 19:04:56.156023 containerd[1566]: time="2025-02-13T19:04:56.155856224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:56.156477 containerd[1566]: time="2025-02-13T19:04:56.156409029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372"
Feb 13 19:04:56.157292 containerd[1566]: time="2025-02-13T19:04:56.157254414Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:56.159391 containerd[1566]: time="2025-02-13T19:04:56.159356591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:56.160029 containerd[1566]: time="2025-02-13T19:04:56.159993305Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.538622618s"
Feb 13 19:04:56.160029 containerd[1566]: time="2025-02-13T19:04:56.160027863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 19:04:56.178539 containerd[1566]: time="2025-02-13T19:04:56.178494831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:04:56.890831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447713750.mount: Deactivated successfully.
Feb 13 19:04:57.543671 containerd[1566]: time="2025-02-13T19:04:57.543624432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:57.544351 containerd[1566]: time="2025-02-13T19:04:57.544303811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 19:04:57.547873 containerd[1566]: time="2025-02-13T19:04:57.547833532Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:57.553459 containerd[1566]: time="2025-02-13T19:04:57.553393296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:57.555529 containerd[1566]: time="2025-02-13T19:04:57.555491915Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.376964933s"
Feb 13 19:04:57.555580 containerd[1566]: time="2025-02-13T19:04:57.555529870Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 19:04:57.574346 containerd[1566]: time="2025-02-13T19:04:57.574303291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 19:04:58.057286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233777465.mount: Deactivated successfully.
Feb 13 19:04:58.061670 containerd[1566]: time="2025-02-13T19:04:58.061623343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:58.062488 containerd[1566]: time="2025-02-13T19:04:58.062424927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 19:04:58.063233 containerd[1566]: time="2025-02-13T19:04:58.063172577Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:58.065982 containerd[1566]: time="2025-02-13T19:04:58.065934063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:58.067543 containerd[1566]: time="2025-02-13T19:04:58.067513149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 493.170021ms"
Feb 13 19:04:58.067593 containerd[1566]: time="2025-02-13T19:04:58.067545284Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 19:04:58.085062 containerd[1566]: time="2025-02-13T19:04:58.085031144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 19:04:58.745538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510233029.mount: Deactivated successfully.
Feb 13 19:05:01.364466 containerd[1566]: time="2025-02-13T19:05:01.364402359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:01.365245 containerd[1566]: time="2025-02-13T19:05:01.365199802Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Feb 13 19:05:01.365999 containerd[1566]: time="2025-02-13T19:05:01.365963285Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:01.369190 containerd[1566]: time="2025-02-13T19:05:01.369157260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:01.370397 containerd[1566]: time="2025-02-13T19:05:01.370364137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.2853001s"
Feb 13 19:05:01.370431 containerd[1566]: time="2025-02-13T19:05:01.370398857Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 13 19:05:03.165019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:05:03.174951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:05:03.350013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:05:03.354299 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:05:03.393634 kubelet[2312]: E0213 19:05:03.393579 2312 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:05:03.395984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:05:03.396189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:05:08.089431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:05:08.101015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:05:08.117658 systemd[1]: Reloading requested from client PID 2330 ('systemctl') (unit session-7.scope)...
Feb 13 19:05:08.117680 systemd[1]: Reloading...
Feb 13 19:05:08.222784 zram_generator::config[2370]: No configuration found.
Feb 13 19:05:08.336674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:05:08.386186 systemd[1]: Reloading finished in 268 ms.
Feb 13 19:05:08.427031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 19:05:08.427100 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 19:05:08.427376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:05:08.430463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:05:08.555360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:05:08.559607 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:05:08.597631 kubelet[2427]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:05:08.597631 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:05:08.597631 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:05:08.598544 kubelet[2427]: I0213 19:05:08.598491 2427 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:05:09.055903 kubelet[2427]: I0213 19:05:09.055854 2427 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:05:09.055903 kubelet[2427]: I0213 19:05:09.055883 2427 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:05:09.056215 kubelet[2427]: I0213 19:05:09.056182 2427 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:05:09.071709 kubelet[2427]: E0213 19:05:09.071649 2427 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.61:6443: connect: connection refused
Feb 13 19:05:09.073308 kubelet[2427]: I0213 19:05:09.073276 2427 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:05:09.082743 kubelet[2427]: I0213 19:05:09.082710 2427 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:05:09.083891 kubelet[2427]: I0213 19:05:09.083849 2427 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:05:09.084072 kubelet[2427]: I0213 19:05:09.083892 2427 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:05:09.084159 kubelet[2427]: I0213 19:05:09.084139 2427 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:05:09.084159 kubelet[2427]: I0213 19:05:09.084149 2427 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:05:09.084442 kubelet[2427]: I0213 19:05:09.084418 2427 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:05:09.089068 kubelet[2427]: I0213 19:05:09.089046 2427 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:05:09.089110 kubelet[2427]: I0213 19:05:09.089071 2427 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:05:09.089351 kubelet[2427]: I0213 19:05:09.089334 2427 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:05:09.089427 kubelet[2427]: I0213 19:05:09.089410 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:05:09.089668 kubelet[2427]: W0213 19:05:09.089616 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Feb 13 19:05:09.089704 kubelet[2427]: E0213 19:05:09.089690 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Feb 13 19:05:09.094071 kubelet[2427]: W0213 19:05:09.094023 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Feb 13 19:05:09.094117 kubelet[2427]: E0213 19:05:09.094078 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Feb 13 19:05:09.095187 kubelet[2427]: I0213 19:05:09.095109 2427 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:05:09.095851 kubelet[2427]: I0213 19:05:09.095837 2427 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:05:09.096072 kubelet[2427]: W0213 19:05:09.096061 2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:05:09.097667 kubelet[2427]: I0213 19:05:09.097280 2427 server.go:1264] "Started kubelet" Feb 13 19:05:09.100041 kubelet[2427]: I0213 19:05:09.098848 2427 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:05:09.100041 kubelet[2427]: I0213 19:05:09.099949 2427 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:05:09.102536 kubelet[2427]: I0213 19:05:09.102433 2427 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:05:09.102945 kubelet[2427]: I0213 19:05:09.102803 2427 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:05:09.103128 kubelet[2427]: E0213 19:05:09.099889 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9f0bdf1b996 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:05:09.097257366 +0000 UTC m=+0.534640802,LastTimestamp:2025-02-13 19:05:09.097257366 +0000 UTC m=+0.534640802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:05:09.108618 kubelet[2427]: I0213 19:05:09.108537 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:05:09.109466 kubelet[2427]: I0213 19:05:09.109450 2427 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:05:09.109670 kubelet[2427]: I0213 19:05:09.109657 2427 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:05:09.110560 kubelet[2427]: I0213 19:05:09.110531 2427 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:05:09.111411 kubelet[2427]: E0213 19:05:09.110768 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" Feb 13 19:05:09.111411 kubelet[2427]: W0213 19:05:09.111027 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:09.111411 kubelet[2427]: E0213 19:05:09.111076 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:09.112631 kubelet[2427]: E0213 19:05:09.112582 2427 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:05:09.113621 kubelet[2427]: I0213 19:05:09.113587 2427 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:05:09.113700 kubelet[2427]: I0213 19:05:09.113690 2427 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:05:09.114003 kubelet[2427]: I0213 19:05:09.113973 2427 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:05:09.126142 kubelet[2427]: I0213 19:05:09.126083 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:05:09.127431 kubelet[2427]: I0213 19:05:09.127396 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:05:09.127572 kubelet[2427]: I0213 19:05:09.127557 2427 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:05:09.127614 kubelet[2427]: I0213 19:05:09.127594 2427 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:05:09.127676 kubelet[2427]: E0213 19:05:09.127654 2427 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:05:09.128562 kubelet[2427]: W0213 19:05:09.128521 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:09.128562 kubelet[2427]: E0213 19:05:09.128561 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:09.133910 kubelet[2427]: I0213 19:05:09.133889 2427 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:05:09.133910 kubelet[2427]: I0213 19:05:09.133907 2427 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:05:09.134007 kubelet[2427]: I0213 19:05:09.133927 2427 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:05:09.211184 kubelet[2427]: I0213 19:05:09.211142 2427 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:09.211500 kubelet[2427]: E0213 19:05:09.211459 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Feb 13 19:05:09.227779 kubelet[2427]: E0213 19:05:09.227738 2427 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:05:09.305478 kubelet[2427]: I0213 19:05:09.305444 2427 policy_none.go:49] "None policy: Start" Feb 13 19:05:09.306407 kubelet[2427]: I0213 19:05:09.306320 2427 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:05:09.306407 kubelet[2427]: I0213 19:05:09.306350 2427 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:05:09.311491 kubelet[2427]: E0213 19:05:09.311455 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Feb 13 19:05:09.311491 kubelet[2427]: I0213 19:05:09.312033 2427 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:05:09.311491 kubelet[2427]: I0213 19:05:09.312207 2427 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:05:09.311491 kubelet[2427]: I0213 19:05:09.312302 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:05:09.313812 kubelet[2427]: E0213 19:05:09.313791 2427 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:05:09.413212 kubelet[2427]: I0213 19:05:09.413178 2427 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:09.413511 kubelet[2427]: E0213 19:05:09.413473 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Feb 13 19:05:09.428678 kubelet[2427]: I0213 19:05:09.428614 2427 topology_manager.go:215] "Topology Admit Handler" podUID="e6f6f230395c44a43b472c085aa7f24c" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:05:09.429842 kubelet[2427]: I0213 19:05:09.429820 2427 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:05:09.430593 kubelet[2427]: I0213 19:05:09.430545 2427 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:05:09.511314 kubelet[2427]: I0213 19:05:09.511269 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:05:09.511314 kubelet[2427]: I0213 19:05:09.511311 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:09.511464 kubelet[2427]: I0213 19:05:09.511334 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:09.511464 kubelet[2427]: I0213 19:05:09.511353 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:09.511464 kubelet[2427]: I0213 19:05:09.511371 2427 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:09.511464 kubelet[2427]: I0213 19:05:09.511389 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:09.511464 kubelet[2427]: I0213 19:05:09.511408 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:09.511583 kubelet[2427]: I0213 19:05:09.511422 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:09.511583 kubelet[2427]: I0213 19:05:09.511436 2427 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:09.712779 kubelet[2427]: E0213 19:05:09.712622 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Feb 13 19:05:09.735045 kubelet[2427]: E0213 19:05:09.734987 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:09.735147 kubelet[2427]: E0213 19:05:09.735083 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:09.736633 kubelet[2427]: E0213 19:05:09.736604 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:09.747100 containerd[1566]: time="2025-02-13T19:05:09.747049991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:09.747452 containerd[1566]: time="2025-02-13T19:05:09.747097009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:09.747452 containerd[1566]: 
time="2025-02-13T19:05:09.747244548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6f6f230395c44a43b472c085aa7f24c,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:09.815716 kubelet[2427]: I0213 19:05:09.815671 2427 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:09.816146 kubelet[2427]: E0213 19:05:09.816120 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Feb 13 19:05:09.989225 kubelet[2427]: W0213 19:05:09.989110 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:09.989225 kubelet[2427]: E0213 19:05:09.989154 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.142602 kubelet[2427]: W0213 19:05:10.142532 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.142602 kubelet[2427]: E0213 19:05:10.142577 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.263935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591461623.mount: Deactivated successfully. 
Feb 13 19:05:10.272623 containerd[1566]: time="2025-02-13T19:05:10.272548497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:10.275600 containerd[1566]: time="2025-02-13T19:05:10.275537218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:10.276266 containerd[1566]: time="2025-02-13T19:05:10.276212813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:05:10.276785 containerd[1566]: time="2025-02-13T19:05:10.276622515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:05:10.277889 containerd[1566]: time="2025-02-13T19:05:10.277859346Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:10.279655 containerd[1566]: time="2025-02-13T19:05:10.279519844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:10.280292 containerd[1566]: time="2025-02-13T19:05:10.280058351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:05:10.285375 containerd[1566]: time="2025-02-13T19:05:10.285297935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:10.286460 containerd[1566]: time="2025-02-13T19:05:10.286170999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.893519ms" Feb 13 19:05:10.287235 containerd[1566]: time="2025-02-13T19:05:10.286912417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.752783ms" Feb 13 19:05:10.288614 containerd[1566]: time="2025-02-13T19:05:10.288477442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.343099ms" Feb 13 19:05:10.455472 containerd[1566]: time="2025-02-13T19:05:10.455345132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:10.455999 containerd[1566]: time="2025-02-13T19:05:10.455947461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:10.456177 containerd[1566]: time="2025-02-13T19:05:10.456137327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.456385 containerd[1566]: time="2025-02-13T19:05:10.456340278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.459884 containerd[1566]: time="2025-02-13T19:05:10.459348045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:10.459884 containerd[1566]: time="2025-02-13T19:05:10.459779355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:10.459884 containerd[1566]: time="2025-02-13T19:05:10.459801043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.460010 containerd[1566]: time="2025-02-13T19:05:10.459889113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.461464 containerd[1566]: time="2025-02-13T19:05:10.461393277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:10.461464 containerd[1566]: time="2025-02-13T19:05:10.461444215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:10.461464 containerd[1566]: time="2025-02-13T19:05:10.461456059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.461687 containerd[1566]: time="2025-02-13T19:05:10.461625918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:10.514341 kubelet[2427]: W0213 19:05:10.512543 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.514341 kubelet[2427]: E0213 19:05:10.512628 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.514341 kubelet[2427]: E0213 19:05:10.513032 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="1.6s" Feb 13 19:05:10.514533 containerd[1566]: time="2025-02-13T19:05:10.513863343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"d874eb0e969f88be93c5faf456c958cd0704f667ee5781516599d4e435936eeb\"" Feb 13 19:05:10.514946 kubelet[2427]: E0213 19:05:10.514925 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:10.517822 containerd[1566]: time="2025-02-13T19:05:10.517782667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6f6f230395c44a43b472c085aa7f24c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67bf99ddb690ca3bee2cda47c768b1cf3975a836aac67fa1234ec3234792d09\"" Feb 13 19:05:10.522000 kubelet[2427]: E0213 19:05:10.518542 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:10.524018 containerd[1566]: time="2025-02-13T19:05:10.523977344Z" level=info msg="CreateContainer within sandbox \"a67bf99ddb690ca3bee2cda47c768b1cf3975a836aac67fa1234ec3234792d09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:05:10.524142 containerd[1566]: time="2025-02-13T19:05:10.524111030Z" level=info msg="CreateContainer within sandbox \"d874eb0e969f88be93c5faf456c958cd0704f667ee5781516599d4e435936eeb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:05:10.526507 containerd[1566]: time="2025-02-13T19:05:10.525105456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"5487faa7bb000413159da5476399870891ff89631c91e9715ce2c496e2be1158\"" Feb 13 19:05:10.526584 kubelet[2427]: E0213 19:05:10.525843 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:10.528588 containerd[1566]: time="2025-02-13T19:05:10.528327298Z" level=info msg="CreateContainer within sandbox \"5487faa7bb000413159da5476399870891ff89631c91e9715ce2c496e2be1158\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:05:10.541183 containerd[1566]: time="2025-02-13T19:05:10.541101625Z" level=info 
msg="CreateContainer within sandbox \"d874eb0e969f88be93c5faf456c958cd0704f667ee5781516599d4e435936eeb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b51f1a12fed09ed4aa1442e07dd89eb9047c8630bc12620a5a6c26330c916c0\"" Feb 13 19:05:10.541906 containerd[1566]: time="2025-02-13T19:05:10.541880576Z" level=info msg="StartContainer for \"0b51f1a12fed09ed4aa1442e07dd89eb9047c8630bc12620a5a6c26330c916c0\"" Feb 13 19:05:10.543069 containerd[1566]: time="2025-02-13T19:05:10.542978358Z" level=info msg="CreateContainer within sandbox \"a67bf99ddb690ca3bee2cda47c768b1cf3975a836aac67fa1234ec3234792d09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b72c9fed0a2327a36cf57b7f8c1eb94b7c3b67c0dc83d283b4bd44b91c82b019\"" Feb 13 19:05:10.543465 containerd[1566]: time="2025-02-13T19:05:10.543440119Z" level=info msg="StartContainer for \"b72c9fed0a2327a36cf57b7f8c1eb94b7c3b67c0dc83d283b4bd44b91c82b019\"" Feb 13 19:05:10.549919 containerd[1566]: time="2025-02-13T19:05:10.549786888Z" level=info msg="CreateContainer within sandbox \"5487faa7bb000413159da5476399870891ff89631c91e9715ce2c496e2be1158\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6987312848bc6a7fbd8e6ecff89a60127f47a8c3da4b01ef6302a2add39da266\"" Feb 13 19:05:10.550438 containerd[1566]: time="2025-02-13T19:05:10.550404823Z" level=info msg="StartContainer for \"6987312848bc6a7fbd8e6ecff89a60127f47a8c3da4b01ef6302a2add39da266\"" Feb 13 19:05:10.608577 containerd[1566]: time="2025-02-13T19:05:10.608535740Z" level=info msg="StartContainer for \"0b51f1a12fed09ed4aa1442e07dd89eb9047c8630bc12620a5a6c26330c916c0\" returns successfully" Feb 13 19:05:10.617986 kubelet[2427]: I0213 19:05:10.617817 2427 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:10.618598 kubelet[2427]: E0213 19:05:10.618574 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Feb 13 19:05:10.632590 containerd[1566]: time="2025-02-13T19:05:10.632461309Z" level=info msg="StartContainer for \"b72c9fed0a2327a36cf57b7f8c1eb94b7c3b67c0dc83d283b4bd44b91c82b019\" returns successfully" Feb 13 19:05:10.632590 containerd[1566]: time="2025-02-13T19:05:10.632542057Z" level=info msg="StartContainer for \"6987312848bc6a7fbd8e6ecff89a60127f47a8c3da4b01ef6302a2add39da266\" returns successfully" Feb 13 19:05:10.649515 kubelet[2427]: W0213 19:05:10.649457 2427 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:10.649636 kubelet[2427]: E0213 19:05:10.649617 2427 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Feb 13 19:05:11.135009 kubelet[2427]: E0213 19:05:11.134983 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:11.141815 kubelet[2427]: E0213 19:05:11.141790 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:11.143638 kubelet[2427]: E0213 19:05:11.143615 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:12.146066 kubelet[2427]: E0213 19:05:12.146033 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:12.221410 kubelet[2427]: I0213 19:05:12.221349 2427 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:12.727201 kubelet[2427]: E0213 19:05:12.727147 2427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:05:12.825126 kubelet[2427]: I0213 19:05:12.824477 2427 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:05:12.836959 kubelet[2427]: E0213 19:05:12.836921 2427 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:05:12.937490 kubelet[2427]: E0213 19:05:12.937452 2427 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:05:13.037965 kubelet[2427]: E0213 19:05:13.037924 2427 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:05:13.138170 kubelet[2427]: E0213 19:05:13.138132 2427 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:05:14.092365 kubelet[2427]: I0213 19:05:14.092259 2427 apiserver.go:52] "Watching apiserver" Feb 13 19:05:14.116387 kubelet[2427]: I0213 19:05:14.110467 2427 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:05:14.161191 kubelet[2427]: E0213 19:05:14.161014 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:14.522083 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-7.scope)... Feb 13 19:05:14.522098 systemd[1]: Reloading... Feb 13 19:05:14.589780 zram_generator::config[2744]: No configuration found. Feb 13 19:05:14.681893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:05:14.736724 systemd[1]: Reloading finished in 214 ms. Feb 13 19:05:14.762912 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:05:14.785743 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:05:14.786098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:05:14.794071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:05:14.885927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
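[Editorial sketch] Each dns.go:153 warning above means the host resolv.conf lists more nameservers than a pod's resolv.conf may carry, so the kubelet keeps only the first three ("1.1.1.1 1.0.0.1 8.8.8.8") and drops the rest. A rough Go sketch of that truncation, assuming a plain parser over /etc/resolv.conf; the three-entry cap matches the applied line in this log, everything else is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // per-pod resolv.conf limit, as applied in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying only: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", nameservers)
}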
Feb 13 19:05:14.889949 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:05:14.930672 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:05:14.930672 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:05:14.930672 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:05:14.931106 kubelet[2797]: I0213 19:05:14.930721 2797 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:05:14.937767 kubelet[2797]: I0213 19:05:14.937717 2797 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:05:14.937767 kubelet[2797]: I0213 19:05:14.937743 2797 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:05:14.937978 kubelet[2797]: I0213 19:05:14.937933 2797 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:05:14.939520 kubelet[2797]: I0213 19:05:14.939324 2797 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:05:14.941478 kubelet[2797]: I0213 19:05:14.941454 2797 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:05:14.948464 kubelet[2797]: I0213 19:05:14.948425 2797 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:05:14.948970 kubelet[2797]: I0213 19:05:14.948890 2797 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:05:14.950592 kubelet[2797]: I0213 19:05:14.948925 2797 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:05:14.950592 kubelet[2797]: I0213 19:05:14.949095 2797 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:05:14.950592 kubelet[2797]: I0213 19:05:14.949105 2797 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:05:14.950592 kubelet[2797]: I0213 19:05:14.949140 2797 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:05:14.950592 kubelet[2797]: I0213 19:05:14.949250 2797 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:05:14.950819 kubelet[2797]: I0213 19:05:14.949263 2797 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:05:14.950819 kubelet[2797]: I0213 19:05:14.949290 2797 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:05:14.950819 kubelet[2797]: I0213 19:05:14.949305 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:05:14.950819 kubelet[2797]: I0213 19:05:14.950035 2797 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:05:14.953765 kubelet[2797]: I0213 19:05:14.951553 2797 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:05:14.953765 kubelet[2797]: I0213 19:05:14.952085 2797 server.go:1264] "Started kubelet" Feb 13 19:05:14.957099 kubelet[2797]: I0213 19:05:14.955773 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:05:14.958641 kubelet[2797]: I0213 19:05:14.958599 2797 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:05:14.958799 kubelet[2797]: I0213 19:05:14.958764 2797 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:05:14.959056 kubelet[2797]: I0213 19:05:14.959031 2797 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:05:14.959895 kubelet[2797]: I0213 19:05:14.959876 2797 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:05:14.961221 kubelet[2797]: I0213 19:05:14.960930 2797 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:05:14.961221 kubelet[2797]: I0213 19:05:14.961103 2797 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:05:14.961305 kubelet[2797]: I0213 19:05:14.961288 2797 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:05:14.961672 kubelet[2797]: I0213 19:05:14.961656 2797 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:05:14.962049 kubelet[2797]: I0213 19:05:14.962027 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:05:14.963326 kubelet[2797]: E0213 19:05:14.963307 2797 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:05:14.973565 kubelet[2797]: I0213 19:05:14.971953 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:05:14.973565 kubelet[2797]: I0213 19:05:14.972806 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:05:14.973565 kubelet[2797]: I0213 19:05:14.972842 2797 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:05:14.973565 kubelet[2797]: I0213 19:05:14.972858 2797 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:05:14.973565 kubelet[2797]: E0213 19:05:14.972899 2797 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:05:14.980738 kubelet[2797]: I0213 19:05:14.980707 2797 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:05:15.024221 kubelet[2797]: I0213 19:05:15.024191 2797 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:05:15.024221 kubelet[2797]: I0213 19:05:15.024209 2797 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:05:15.024612 kubelet[2797]: I0213 19:05:15.024238 2797 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:05:15.024612 kubelet[2797]: I0213 19:05:15.024381 2797 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:05:15.024612 kubelet[2797]: I0213 19:05:15.024391 2797 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:05:15.024612 kubelet[2797]: I0213 19:05:15.024408 2797 policy_none.go:49] "None policy: Start" Feb 13 19:05:15.024971 kubelet[2797]: I0213 19:05:15.024954 2797 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:05:15.025000 kubelet[2797]: I0213 19:05:15.024981 2797 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:05:15.025148 kubelet[2797]: I0213 19:05:15.025134 2797 state_mem.go:75] "Updated machine memory state" Feb 13 19:05:15.026464 kubelet[2797]: I0213 19:05:15.026342 2797 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:05:15.026526 
kubelet[2797]: I0213 19:05:15.026495 2797 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:05:15.026596 kubelet[2797]: I0213 19:05:15.026583 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:05:15.065120 kubelet[2797]: I0213 19:05:15.063876 2797 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:05:15.073171 kubelet[2797]: I0213 19:05:15.073117 2797 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:05:15.074106 kubelet[2797]: I0213 19:05:15.073330 2797 topology_manager.go:215] "Topology Admit Handler" podUID="e6f6f230395c44a43b472c085aa7f24c" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:05:15.074106 kubelet[2797]: I0213 19:05:15.073544 2797 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:05:15.074106 kubelet[2797]: I0213 19:05:15.073584 2797 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:05:15.074106 kubelet[2797]: I0213 19:05:15.074026 2797 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:05:15.080502 kubelet[2797]: E0213 19:05:15.080450 2797 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:15.161832 kubelet[2797]: I0213 19:05:15.161787 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:15.161832 kubelet[2797]: I0213 19:05:15.161828 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:15.161990 kubelet[2797]: I0213 19:05:15.161847 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:15.161990 kubelet[2797]: I0213 19:05:15.161864 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:15.161990 kubelet[2797]: I0213 19:05:15.161886 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:15.161990 kubelet[2797]: I0213 19:05:15.161906 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:15.161990 kubelet[2797]: I0213 19:05:15.161922 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6f6f230395c44a43b472c085aa7f24c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6f6f230395c44a43b472c085aa7f24c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:15.162099 kubelet[2797]: I0213 19:05:15.161937 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:15.162099 kubelet[2797]: I0213 19:05:15.161952 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:05:15.379018 kubelet[2797]: E0213 19:05:15.378828 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:15.381056 kubelet[2797]: E0213 19:05:15.380988 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:15.381471 kubelet[2797]: E0213 19:05:15.381438 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:15.526556 sudo[2830]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:05:15.526844 sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:05:15.950379 kubelet[2797]: I0213 19:05:15.950334 2797 apiserver.go:52] "Watching apiserver" Feb 13 19:05:15.953485 sudo[2830]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:15.962068 kubelet[2797]: I0213 19:05:15.962037 2797 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:05:16.002692 kubelet[2797]: E0213 19:05:16.002557 2797 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:05:16.002824 kubelet[2797]: E0213 19:05:16.002810 2797 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:05:16.003215 kubelet[2797]: E0213 19:05:16.003197 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:16.004797 kubelet[2797]: E0213 19:05:16.004522 2797 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:05:16.005182 kubelet[2797]: E0213 19:05:16.005166 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:16.007847 kubelet[2797]: E0213 19:05:16.006907 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:16.022256 kubelet[2797]: I0213 19:05:16.022127 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.022111871 podStartE2EDuration="1.022111871s" podCreationTimestamp="2025-02-13 19:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:16.019852284 +0000 UTC m=+1.126038143" watchObservedRunningTime="2025-02-13 19:05:16.022111871 +0000 UTC m=+1.128297730" Feb 13 19:05:16.029049 kubelet[2797]: I0213 19:05:16.028906 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.028857306 podStartE2EDuration="2.028857306s" podCreationTimestamp="2025-02-13 19:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:16.028669157 +0000 UTC m=+1.134855016" watchObservedRunningTime="2025-02-13 19:05:16.028857306 +0000 UTC m=+1.135043125" Feb 13 19:05:16.040356 kubelet[2797]: I0213 19:05:16.040284 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.040268257 podStartE2EDuration="1.040268257s" podCreationTimestamp="2025-02-13 19:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:16.039795664 +0000 UTC m=+1.145981563" watchObservedRunningTime="2025-02-13 19:05:16.040268257 +0000 UTC m=+1.146454116" Feb 13 19:05:17.000057 kubelet[2797]: E0213 19:05:16.998864 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:17.000057 kubelet[2797]: E0213 19:05:16.998959 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:17.000057 kubelet[2797]: E0213 19:05:16.999380 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:18.001015 kubelet[2797]: E0213 19:05:18.000968 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:18.322007 sudo[1779]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:18.323198 sshd[1778]: Connection closed by 10.0.0.1 port 
56812 Feb 13 19:05:18.323923 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:18.326387 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:56812.service: Deactivated successfully. Feb 13 19:05:18.329624 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:05:18.329967 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:05:18.331483 systemd-logind[1550]: Removed session 7. Feb 13 19:05:23.588256 kubelet[2797]: E0213 19:05:23.588213 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:24.012309 kubelet[2797]: E0213 19:05:24.012056 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:25.613628 kubelet[2797]: E0213 19:05:25.613580 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:26.014063 kubelet[2797]: E0213 19:05:26.013862 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:26.146338 update_engine[1552]: I20250213 19:05:26.146253 1552 update_attempter.cc:509] Updating boot flags... Feb 13 19:05:26.180786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2882) Feb 13 19:05:26.185460 kubelet[2797]: E0213 19:05:26.185365 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:26.212789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2879) Feb 13 19:05:31.057100 kubelet[2797]: I0213 19:05:31.057031 2797 topology_manager.go:215] "Topology Admit Handler" podUID="bb368f3c-6446-4922-a478-0e018ea6bff8" podNamespace="kube-system" podName="kube-proxy-qldd7" Feb 13 19:05:31.071937 kubelet[2797]: I0213 19:05:31.070621 2797 topology_manager.go:215] "Topology Admit Handler" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" podNamespace="kube-system" podName="cilium-mds6r" Feb 13 19:05:31.081257 kubelet[2797]: I0213 19:05:31.080366 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb368f3c-6446-4922-a478-0e018ea6bff8-kube-proxy\") pod \"kube-proxy-qldd7\" (UID: \"bb368f3c-6446-4922-a478-0e018ea6bff8\") " pod="kube-system/kube-proxy-qldd7" Feb 13 19:05:31.081257 kubelet[2797]: I0213 19:05:31.080403 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb368f3c-6446-4922-a478-0e018ea6bff8-xtables-lock\") pod \"kube-proxy-qldd7\" (UID: \"bb368f3c-6446-4922-a478-0e018ea6bff8\") " pod="kube-system/kube-proxy-qldd7" Feb 13 19:05:31.081257 kubelet[2797]: I0213 19:05:31.080423 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb368f3c-6446-4922-a478-0e018ea6bff8-lib-modules\") pod \"kube-proxy-qldd7\" (UID: \"bb368f3c-6446-4922-a478-0e018ea6bff8\") " 
pod="kube-system/kube-proxy-qldd7" Feb 13 19:05:31.081257 kubelet[2797]: I0213 19:05:31.080443 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9stp\" (UniqueName: \"kubernetes.io/projected/bb368f3c-6446-4922-a478-0e018ea6bff8-kube-api-access-l9stp\") pod \"kube-proxy-qldd7\" (UID: \"bb368f3c-6446-4922-a478-0e018ea6bff8\") " pod="kube-system/kube-proxy-qldd7" Feb 13 19:05:31.126346 kubelet[2797]: I0213 19:05:31.126315 2797 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:05:31.133317 containerd[1566]: time="2025-02-13T19:05:31.133242370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:05:31.133714 kubelet[2797]: I0213 19:05:31.133591 2797 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:05:31.181218 kubelet[2797]: I0213 19:05:31.181161 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cni-path\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.181218 kubelet[2797]: I0213 19:05:31.181218 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rj9\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.181364 kubelet[2797]: I0213 19:05:31.181241 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-xtables-lock\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186459 kubelet[2797]: I0213 19:05:31.186408 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-bpf-maps\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186530 kubelet[2797]: I0213 19:05:31.186488 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-lib-modules\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186530 kubelet[2797]: I0213 19:05:31.186507 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ba56aad-2afe-43b8-878f-a6b87a22d540-clustermesh-secrets\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186587 kubelet[2797]: I0213 19:05:31.186537 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-kernel\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " 
pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186587 kubelet[2797]: I0213 19:05:31.186555 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-net\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186587 kubelet[2797]: I0213 19:05:31.186571 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-run\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186652 kubelet[2797]: I0213 19:05:31.186589 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-hostproc\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186652 kubelet[2797]: I0213 19:05:31.186605 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-etc-cni-netd\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186652 kubelet[2797]: I0213 19:05:31.186625 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-cgroup\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186652 kubelet[2797]: I0213 19:05:31.186641 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-config-path\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.186735 kubelet[2797]: I0213 19:05:31.186658 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-hubble-tls\") pod \"cilium-mds6r\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") " pod="kube-system/cilium-mds6r" Feb 13 19:05:31.194488 kubelet[2797]: E0213 19:05:31.194453 2797 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:05:31.194488 kubelet[2797]: E0213 19:05:31.194492 2797 projected.go:200] Error preparing data for projected volume kube-api-access-l9stp for pod kube-system/kube-proxy-qldd7: configmap "kube-root-ca.crt" not found Feb 13 19:05:31.194595 kubelet[2797]: E0213 19:05:31.194570 2797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bb368f3c-6446-4922-a478-0e018ea6bff8-kube-api-access-l9stp podName:bb368f3c-6446-4922-a478-0e018ea6bff8 nodeName:}" failed. No retries permitted until 2025-02-13 19:05:31.694540511 +0000 UTC m=+16.800726370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l9stp" (UniqueName: "kubernetes.io/projected/bb368f3c-6446-4922-a478-0e018ea6bff8-kube-api-access-l9stp") pod "kube-proxy-qldd7" (UID: "bb368f3c-6446-4922-a478-0e018ea6bff8") : configmap "kube-root-ca.crt" not found Feb 13 19:05:31.296421 kubelet[2797]: E0213 19:05:31.296378 2797 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:05:31.296421 kubelet[2797]: E0213 19:05:31.296416 2797 projected.go:200] Error preparing data for projected volume kube-api-access-t9rj9 for pod kube-system/cilium-mds6r: configmap "kube-root-ca.crt" not found Feb 13 19:05:31.296574 kubelet[2797]: E0213 19:05:31.296463 2797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9 podName:9ba56aad-2afe-43b8-878f-a6b87a22d540 nodeName:}" failed. No retries permitted until 2025-02-13 19:05:31.796445647 +0000 UTC m=+16.902631506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t9rj9" (UniqueName: "kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9") pod "cilium-mds6r" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540") : configmap "kube-root-ca.crt" not found Feb 13 19:05:31.702228 kubelet[2797]: I0213 19:05:31.698812 2797 topology_manager.go:215] "Topology Admit Handler" podUID="a5db5df6-61e9-4793-a2cc-3e041194bdf9" podNamespace="kube-system" podName="cilium-operator-599987898-n8z45" Feb 13 19:05:31.791398 kubelet[2797]: I0213 19:05:31.791263 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5db5df6-61e9-4793-a2cc-3e041194bdf9-cilium-config-path\") pod \"cilium-operator-599987898-n8z45\" (UID: \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\") " pod="kube-system/cilium-operator-599987898-n8z45" Feb 13 19:05:31.791398 kubelet[2797]: I0213 19:05:31.791350 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmfx\" (UniqueName: \"kubernetes.io/projected/a5db5df6-61e9-4793-a2cc-3e041194bdf9-kube-api-access-5tmfx\") pod \"cilium-operator-599987898-n8z45\" (UID: \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\") " pod="kube-system/cilium-operator-599987898-n8z45" Feb 13 19:05:31.971480 kubelet[2797]: E0213 19:05:31.970974 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:31.981773 containerd[1566]: time="2025-02-13T19:05:31.981708815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qldd7,Uid:bb368f3c-6446-4922-a478-0e018ea6bff8,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:31.987157 kubelet[2797]: E0213 19:05:31.987112 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:31.987786 containerd[1566]: time="2025-02-13T19:05:31.987603312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mds6r,Uid:9ba56aad-2afe-43b8-878f-a6b87a22d540,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:32.002462 containerd[1566]: time="2025-02-13T19:05:32.002374553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:32.002462 containerd[1566]: time="2025-02-13T19:05:32.002431357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:32.002462 containerd[1566]: time="2025-02-13T19:05:32.002443678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.002609 containerd[1566]: time="2025-02-13T19:05:32.002530764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.007936 kubelet[2797]: E0213 19:05:32.007523 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:32.008097 containerd[1566]: time="2025-02-13T19:05:32.008058818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n8z45,Uid:a5db5df6-61e9-4793-a2cc-3e041194bdf9,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:32.011799 containerd[1566]: time="2025-02-13T19:05:32.011676862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:32.011799 containerd[1566]: time="2025-02-13T19:05:32.011733506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:32.011799 containerd[1566]: time="2025-02-13T19:05:32.011745787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.012105 containerd[1566]: time="2025-02-13T19:05:32.012063849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.047817 containerd[1566]: time="2025-02-13T19:05:32.047726901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mds6r,Uid:9ba56aad-2afe-43b8-878f-a6b87a22d540,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\"" Feb 13 19:05:32.048985 containerd[1566]: time="2025-02-13T19:05:32.048953224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qldd7,Uid:bb368f3c-6446-4922-a478-0e018ea6bff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7319f79c64100cbfbd2396c3f1af9582d491a1a8b539cd904316acbf79e8f9b5\"" Feb 13 19:05:32.051825 kubelet[2797]: E0213 19:05:32.050233 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:32.051825 kubelet[2797]: E0213 19:05:32.050986 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:32.052311 containerd[1566]: time="2025-02-13T19:05:32.052224165Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:05:32.053973 containerd[1566]: time="2025-02-13T19:05:32.053942682Z" level=info msg="CreateContainer within sandbox \"7319f79c64100cbfbd2396c3f1af9582d491a1a8b539cd904316acbf79e8f9b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:05:32.068323 containerd[1566]: time="2025-02-13T19:05:32.068096439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:32.069504 containerd[1566]: time="2025-02-13T19:05:32.068890373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:32.069504 containerd[1566]: time="2025-02-13T19:05:32.068915295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.070016 containerd[1566]: time="2025-02-13T19:05:32.069946204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:32.079176 containerd[1566]: time="2025-02-13T19:05:32.079115185Z" level=info msg="CreateContainer within sandbox \"7319f79c64100cbfbd2396c3f1af9582d491a1a8b539cd904316acbf79e8f9b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27970b99f0dd903666fe4e6b2a8460912926483567c085825bd08ecabf3adf78\"" Feb 13 19:05:32.083248 containerd[1566]: time="2025-02-13T19:05:32.083217942Z" level=info msg="StartContainer for \"27970b99f0dd903666fe4e6b2a8460912926483567c085825bd08ecabf3adf78\"" Feb 13 19:05:32.118449 containerd[1566]: time="2025-02-13T19:05:32.118384321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-n8z45,Uid:a5db5df6-61e9-4793-a2cc-3e041194bdf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\"" Feb 13 19:05:32.119265 kubelet[2797]: E0213 19:05:32.119040 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:32.148198 containerd[1566]: time="2025-02-13T19:05:32.148158536Z" level=info msg="StartContainer for \"27970b99f0dd903666fe4e6b2a8460912926483567c085825bd08ecabf3adf78\" returns successfully" Feb 13 19:05:33.035923 kubelet[2797]: E0213 19:05:33.034913 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:33.127188 kubelet[2797]: I0213 19:05:33.126902 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qldd7" podStartSLOduration=2.124829806 podStartE2EDuration="2.124829806s" podCreationTimestamp="2025-02-13 19:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:33.124676196 +0000 UTC m=+18.230862055" watchObservedRunningTime="2025-02-13 19:05:33.124829806 +0000 UTC m=+18.231015665" Feb 13 19:05:35.041485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169069673.mount: Deactivated successfully. 
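[Editorial sketch] The containerd lines in this section follow the CRI sequence for every pod: RunPodSandbox returns a sandbox ID, CreateContainer places a named container inside that sandbox, and StartContainer launches it ("StartContainer ... returns successfully"). A condensed Go sketch of that call order; the Runtime interface and fake implementation below are illustrative stand-ins, not the real k8s.io/cri-api client signatures:

package main

import "fmt"

// Runtime mirrors the three CRI calls visible in the log; a real client
// would speak gRPC to containerd's CRI plugin instead of this stand-in.
type Runtime interface {
	RunPodSandbox(podName, namespace, uid string) (string, error)
	CreateContainer(sandboxID, containerName string) (string, error)
	StartContainer(containerID string) error
}

func startPod(rt Runtime, podName, namespace, uid string) error {
	sandboxID, err := rt.RunPodSandbox(podName, namespace, uid)
	if err != nil {
		return fmt.Errorf("RunPodSandbox for %s: %w", podName, err)
	}
	containerID, err := rt.CreateContainer(sandboxID, podName)
	if err != nil {
		return fmt.Errorf("CreateContainer within sandbox %s: %w", sandboxID, err)
	}
	return rt.StartContainer(containerID)
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(podName, namespace, uid string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, containerName string) (string, error) {
	f.n++
	return fmt.Sprintf("%s-container-%d", containerName, f.n), nil
}

func (f *fakeRuntime) StartContainer(containerID string) error {
	fmt.Println("StartContainer for", containerID, "returns successfully")
	return nil
}

func main() {
	err := startPod(&fakeRuntime{}, "kube-proxy-qldd7", "kube-system",
		"bb368f3c-6446-4922-a478-0e018ea6bff8")
	if err != nil {
		panic(err)
	}
}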
Feb 13 19:05:36.471114 containerd[1566]: time="2025-02-13T19:05:36.471063381Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:36.472138 containerd[1566]: time="2025-02-13T19:05:36.471926630Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:05:36.472813 containerd[1566]: time="2025-02-13T19:05:36.472780919Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:36.474571 containerd[1566]: time="2025-02-13T19:05:36.474483616Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.422217927s" Feb 13 19:05:36.474571 containerd[1566]: time="2025-02-13T19:05:36.474519298Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:05:36.484581 containerd[1566]: time="2025-02-13T19:05:36.484385299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:05:36.485774 containerd[1566]: time="2025-02-13T19:05:36.485717935Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:05:36.510787 containerd[1566]: time="2025-02-13T19:05:36.510675634Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\"" Feb 13 19:05:36.511638 containerd[1566]: time="2025-02-13T19:05:36.511595326Z" level=info msg="StartContainer for \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\"" Feb 13 19:05:36.560735 containerd[1566]: time="2025-02-13T19:05:36.560685037Z" level=info msg="StartContainer for \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\" returns successfully" Feb 13 19:05:36.722032 containerd[1566]: time="2025-02-13T19:05:36.721889323Z" level=info msg="shim disconnected" id=9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81 namespace=k8s.io Feb 13 19:05:36.722032 containerd[1566]: time="2025-02-13T19:05:36.721944926Z" level=warning msg="cleaning up after shim disconnected" id=9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81 namespace=k8s.io Feb 13 19:05:36.722032 containerd[1566]: time="2025-02-13T19:05:36.721953487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:37.053869 kubelet[2797]: E0213 19:05:37.053831 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 13 19:05:37.057480 containerd[1566]: time="2025-02-13T19:05:37.057401993Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:05:37.069837 containerd[1566]: time="2025-02-13T19:05:37.069800470Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\"" Feb 13 19:05:37.070262 containerd[1566]: time="2025-02-13T19:05:37.070236293Z" level=info msg="StartContainer for \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\"" Feb 13 19:05:37.125016 containerd[1566]: time="2025-02-13T19:05:37.124972880Z" level=info msg="StartContainer for \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\" returns successfully" Feb 13 19:05:37.141385 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:05:37.142008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:05:37.142078 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:05:37.152010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:05:37.171381 containerd[1566]: time="2025-02-13T19:05:37.171293848Z" level=info msg="shim disconnected" id=9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca namespace=k8s.io Feb 13 19:05:37.171381 containerd[1566]: time="2025-02-13T19:05:37.171371212Z" level=warning msg="cleaning up after shim disconnected" id=9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca namespace=k8s.io Feb 13 19:05:37.171574 containerd[1566]: time="2025-02-13T19:05:37.171390973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:37.173803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:05:37.509690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81-rootfs.mount: Deactivated successfully. 
Feb 13 19:05:38.056856 kubelet[2797]: E0213 19:05:38.056570 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:38.062957 containerd[1566]: time="2025-02-13T19:05:38.062751683Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:05:38.094400 containerd[1566]: time="2025-02-13T19:05:38.094333859Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\"" Feb 13 19:05:38.094984 containerd[1566]: time="2025-02-13T19:05:38.094958332Z" level=info msg="StartContainer for \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\"" Feb 13 19:05:38.146605 containerd[1566]: time="2025-02-13T19:05:38.146564717Z" level=info msg="StartContainer for \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\" returns successfully" Feb 13 19:05:38.191355 containerd[1566]: time="2025-02-13T19:05:38.191288182Z" level=info msg="shim disconnected" id=9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9 namespace=k8s.io Feb 13 19:05:38.191355 containerd[1566]: time="2025-02-13T19:05:38.191344025Z" level=warning msg="cleaning up after shim disconnected" id=9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9 namespace=k8s.io Feb 13 19:05:38.191355 containerd[1566]: time="2025-02-13T19:05:38.191352265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:38.509142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9-rootfs.mount: Deactivated successfully. Feb 13 19:05:38.738692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579510350.mount: Deactivated successfully. Feb 13 19:05:39.062556 kubelet[2797]: E0213 19:05:39.061413 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:39.067248 containerd[1566]: time="2025-02-13T19:05:39.066844789Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:05:39.083465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560897584.mount: Deactivated successfully. 
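The mount-bpf-fs init container created and reaped above exists to ensure the BPF filesystem is mounted at /sys/fs/bpf before the Cilium agent loads its programs. Functionally it is close to the following sketch (an assumption about the container's job, not its literal source):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount -t bpf bpf /sys/fs/bpf`; idempotence and
	// "already mounted" detection are omitted for brevity.
	if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```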
Feb 13 19:05:39.084472 containerd[1566]: time="2025-02-13T19:05:39.084352912Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\"" Feb 13 19:05:39.085104 containerd[1566]: time="2025-02-13T19:05:39.085063988Z" level=info msg="StartContainer for \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\"" Feb 13 19:05:39.138799 containerd[1566]: time="2025-02-13T19:05:39.138178345Z" level=info msg="StartContainer for \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\" returns successfully" Feb 13 19:05:39.197578 containerd[1566]: time="2025-02-13T19:05:39.197500775Z" level=info msg="shim disconnected" id=418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28 namespace=k8s.io Feb 13 19:05:39.197578 containerd[1566]: time="2025-02-13T19:05:39.197568779Z" level=warning msg="cleaning up after shim disconnected" id=418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28 namespace=k8s.io Feb 13 19:05:39.197578 containerd[1566]: time="2025-02-13T19:05:39.197578859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:39.280279 containerd[1566]: time="2025-02-13T19:05:39.280237106Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:39.280698 containerd[1566]: time="2025-02-13T19:05:39.280652567Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:05:39.281532 containerd[1566]: time="2025-02-13T19:05:39.281486929Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:39.283360 containerd[1566]: time="2025-02-13T19:05:39.282845958Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.798425737s" Feb 13 19:05:39.283360 containerd[1566]: time="2025-02-13T19:05:39.282879439Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:05:39.285489 containerd[1566]: time="2025-02-13T19:05:39.285457969Z" level=info msg="CreateContainer within sandbox \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:05:39.295251 containerd[1566]: time="2025-02-13T19:05:39.295212101Z" level=info msg="CreateContainer within sandbox \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\"" Feb 13 19:05:39.295952 containerd[1566]: 
time="2025-02-13T19:05:39.295771449Z" level=info msg="StartContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\"" Feb 13 19:05:39.343571 containerd[1566]: time="2025-02-13T19:05:39.343466013Z" level=info msg="StartContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" returns successfully" Feb 13 19:05:40.067980 kubelet[2797]: E0213 19:05:40.067940 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:40.073940 kubelet[2797]: E0213 19:05:40.073911 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:40.077609 containerd[1566]: time="2025-02-13T19:05:40.077565196Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:05:40.115273 containerd[1566]: time="2025-02-13T19:05:40.115226943Z" level=info msg="CreateContainer within sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\"" Feb 13 19:05:40.115952 containerd[1566]: time="2025-02-13T19:05:40.115928017Z" level=info msg="StartContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\"" Feb 13 19:05:40.117580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281567339.mount: Deactivated successfully. Feb 13 19:05:40.128860 kubelet[2797]: I0213 19:05:40.128727 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-n8z45" podStartSLOduration=1.965810286 podStartE2EDuration="9.128691156s" podCreationTimestamp="2025-02-13 19:05:31 +0000 UTC" firstStartedPulling="2025-02-13 19:05:32.121064423 +0000 UTC m=+17.227250242" lastFinishedPulling="2025-02-13 19:05:39.283945253 +0000 UTC m=+24.390131112" observedRunningTime="2025-02-13 19:05:40.110260662 +0000 UTC m=+25.216446521" watchObservedRunningTime="2025-02-13 19:05:40.128691156 +0000 UTC m=+25.234876975" Feb 13 19:05:40.178112 containerd[1566]: time="2025-02-13T19:05:40.178070872Z" level=info msg="StartContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" returns successfully" Feb 13 19:05:40.314319 kubelet[2797]: I0213 19:05:40.314001 2797 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:05:40.346846 kubelet[2797]: I0213 19:05:40.344548 2797 topology_manager.go:215] "Topology Admit Handler" podUID="b3b77ef7-01bd-4ce9-9a37-1a7c72864763" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jl99j" Feb 13 19:05:40.346846 kubelet[2797]: I0213 19:05:40.344770 2797 topology_manager.go:215] "Topology Admit Handler" podUID="65321192-5e88-4d75-8e73-c0e796d9b9b4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7tgnf" Feb 13 19:05:40.356621 kubelet[2797]: I0213 19:05:40.356572 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks442\" (UniqueName: \"kubernetes.io/projected/65321192-5e88-4d75-8e73-c0e796d9b9b4-kube-api-access-ks442\") pod \"coredns-7db6d8ff4d-7tgnf\" (UID: \"65321192-5e88-4d75-8e73-c0e796d9b9b4\") " 
pod="kube-system/coredns-7db6d8ff4d-7tgnf" Feb 13 19:05:40.356621 kubelet[2797]: I0213 19:05:40.356619 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpc74\" (UniqueName: \"kubernetes.io/projected/b3b77ef7-01bd-4ce9-9a37-1a7c72864763-kube-api-access-dpc74\") pod \"coredns-7db6d8ff4d-jl99j\" (UID: \"b3b77ef7-01bd-4ce9-9a37-1a7c72864763\") " pod="kube-system/coredns-7db6d8ff4d-jl99j" Feb 13 19:05:40.356820 kubelet[2797]: I0213 19:05:40.356659 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65321192-5e88-4d75-8e73-c0e796d9b9b4-config-volume\") pod \"coredns-7db6d8ff4d-7tgnf\" (UID: \"65321192-5e88-4d75-8e73-c0e796d9b9b4\") " pod="kube-system/coredns-7db6d8ff4d-7tgnf" Feb 13 19:05:40.356820 kubelet[2797]: I0213 19:05:40.356679 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b77ef7-01bd-4ce9-9a37-1a7c72864763-config-volume\") pod \"coredns-7db6d8ff4d-jl99j\" (UID: \"b3b77ef7-01bd-4ce9-9a37-1a7c72864763\") " pod="kube-system/coredns-7db6d8ff4d-jl99j" Feb 13 19:05:40.511908 systemd[1]: run-containerd-runc-k8s.io-7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68-runc.Oa8Jyy.mount: Deactivated successfully. Feb 13 19:05:40.652431 kubelet[2797]: E0213 19:05:40.652312 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:40.653372 kubelet[2797]: E0213 19:05:40.652927 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:40.654081 containerd[1566]: time="2025-02-13T19:05:40.654001926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7tgnf,Uid:65321192-5e88-4d75-8e73-c0e796d9b9b4,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:40.654170 containerd[1566]: time="2025-02-13T19:05:40.654084330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl99j,Uid:b3b77ef7-01bd-4ce9-9a37-1a7c72864763,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:41.079811 kubelet[2797]: E0213 19:05:41.079463 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:41.082290 kubelet[2797]: E0213 19:05:41.082232 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:42.081350 kubelet[2797]: E0213 19:05:42.081320 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:42.660981 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:58212.service - OpenSSH per-connection server daemon (10.0.0.1:58212). 
Feb 13 19:05:42.711158 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 58212 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:42.712543 sshd-session[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:42.717403 systemd-logind[1550]: New session 8 of user core. Feb 13 19:05:42.727991 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:05:42.853042 sshd[3647]: Connection closed by 10.0.0.1 port 58212 Feb 13 19:05:42.853568 sshd-session[3644]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:42.856709 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:58212.service: Deactivated successfully. Feb 13 19:05:42.858937 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:05:42.859010 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:05:42.862308 systemd-logind[1550]: Removed session 8. Feb 13 19:05:43.083306 kubelet[2797]: E0213 19:05:43.083270 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:43.190073 systemd-networkd[1230]: cilium_host: Link UP Feb 13 19:05:43.190196 systemd-networkd[1230]: cilium_net: Link UP Feb 13 19:05:43.190320 systemd-networkd[1230]: cilium_net: Gained carrier Feb 13 19:05:43.190446 systemd-networkd[1230]: cilium_host: Gained carrier Feb 13 19:05:43.278694 systemd-networkd[1230]: cilium_vxlan: Link UP Feb 13 19:05:43.280223 systemd-networkd[1230]: cilium_vxlan: Gained carrier Feb 13 19:05:43.575802 kernel: NET: Registered PF_ALG protocol family Feb 13 19:05:43.824858 systemd-networkd[1230]: cilium_net: Gained IPv6LL Feb 13 19:05:44.161290 systemd-networkd[1230]: lxc_health: Link UP Feb 13 19:05:44.175098 systemd-networkd[1230]: lxc_health: Gained carrier Feb 13 19:05:44.213273 systemd-networkd[1230]: cilium_host: Gained IPv6LL Feb 13 19:05:44.324195 systemd-networkd[1230]: lxc18506e163ca3: Link UP Feb 13 19:05:44.335778 kernel: eth0: renamed from tmp3d8e9 Feb 13 19:05:44.338457 systemd-networkd[1230]: lxc18506e163ca3: Gained carrier Feb 13 19:05:44.338965 systemd-networkd[1230]: lxc70341b217bb1: Link UP Feb 13 19:05:44.343879 kernel: eth0: renamed from tmpcf0eb Feb 13 19:05:44.355432 systemd-networkd[1230]: lxc70341b217bb1: Gained carrier Feb 13 19:05:44.649506 kubelet[2797]: E0213 19:05:44.649460 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:44.785149 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL Feb 13 19:05:45.553987 systemd-networkd[1230]: lxc70341b217bb1: Gained IPv6LL Feb 13 19:05:45.616927 systemd-networkd[1230]: lxc18506e163ca3: Gained IPv6LL Feb 13 19:05:45.872893 systemd-networkd[1230]: lxc_health: Gained IPv6LL Feb 13 19:05:46.000728 kubelet[2797]: E0213 19:05:46.000689 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:46.016728 kubelet[2797]: I0213 19:05:46.015913 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mds6r" podStartSLOduration=10.583240362 podStartE2EDuration="15.015897851s" podCreationTimestamp="2025-02-13 19:05:31 +0000 UTC" firstStartedPulling="2025-02-13 19:05:32.051585242 +0000 UTC m=+17.157771061" 
lastFinishedPulling="2025-02-13 19:05:36.484242691 +0000 UTC m=+21.590428550" observedRunningTime="2025-02-13 19:05:41.094965757 +0000 UTC m=+26.201151616" watchObservedRunningTime="2025-02-13 19:05:46.015897851 +0000 UTC m=+31.122083710" Feb 13 19:05:46.088121 kubelet[2797]: E0213 19:05:46.088090 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:47.863121 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:58222.service - OpenSSH per-connection server daemon (10.0.0.1:58222). Feb 13 19:05:47.910743 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 58222 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:47.912036 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:47.922614 systemd-logind[1550]: New session 9 of user core. Feb 13 19:05:47.926639 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:05:47.954994 containerd[1566]: time="2025-02-13T19:05:47.954469029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:47.954994 containerd[1566]: time="2025-02-13T19:05:47.954588073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.955318821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.955422905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.954985769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.955349063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.955362023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:47.955487 containerd[1566]: time="2025-02-13T19:05:47.955444666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:47.979092 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:05:47.985193 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:05:48.008080 containerd[1566]: time="2025-02-13T19:05:48.007969426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jl99j,Uid:b3b77ef7-01bd-4ce9-9a37-1a7c72864763,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0ebb999fb01ec4a396571d07879ee5f01d80770b69d4af4ecba319bb9c3329\"" Feb 13 19:05:48.009928 kubelet[2797]: E0213 19:05:48.008717 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:48.012594 containerd[1566]: time="2025-02-13T19:05:48.011426834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7tgnf,Uid:65321192-5e88-4d75-8e73-c0e796d9b9b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d8e9c0087da2f76653bfa380ee076ba14aca53f2cf366f3e862ccf2e6d23a2b\"" Feb 13 19:05:48.013471 containerd[1566]: time="2025-02-13T19:05:48.013099937Z" level=info msg="CreateContainer within sandbox \"cf0ebb999fb01ec4a396571d07879ee5f01d80770b69d4af4ecba319bb9c3329\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:05:48.013545 kubelet[2797]: E0213 19:05:48.013256 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:48.015327 containerd[1566]: time="2025-02-13T19:05:48.015295778Z" level=info msg="CreateContainer within sandbox \"3d8e9c0087da2f76653bfa380ee076ba14aca53f2cf366f3e862ccf2e6d23a2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:05:48.040313 containerd[1566]: time="2025-02-13T19:05:48.040264625Z" level=info msg="CreateContainer within sandbox \"cf0ebb999fb01ec4a396571d07879ee5f01d80770b69d4af4ecba319bb9c3329\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8324aee9161e231082801dc1d9c27c827a835fa6a313f352bf72a488672ad4ad\"" Feb 13 19:05:48.042213 containerd[1566]: time="2025-02-13T19:05:48.040801804Z" level=info msg="StartContainer for \"8324aee9161e231082801dc1d9c27c827a835fa6a313f352bf72a488672ad4ad\"" Feb 13 19:05:48.045211 containerd[1566]: time="2025-02-13T19:05:48.045097444Z" level=info msg="CreateContainer within sandbox \"3d8e9c0087da2f76653bfa380ee076ba14aca53f2cf366f3e862ccf2e6d23a2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"252a136cd7df2cee330981d06d4fc3ca9d1f7e504fa97a3db9c9e31ca5c97148\"" Feb 13 19:05:48.045738 containerd[1566]: time="2025-02-13T19:05:48.045706586Z" level=info msg="StartContainer for \"252a136cd7df2cee330981d06d4fc3ca9d1f7e504fa97a3db9c9e31ca5c97148\"" Feb 13 19:05:48.083624 sshd[4044]: Connection closed by 10.0.0.1 port 58222 Feb 13 19:05:48.083119 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:48.087284 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:58222.service: Deactivated successfully. Feb 13 19:05:48.092633 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:05:48.094893 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:05:48.096737 systemd-logind[1550]: Removed session 9. 
Feb 13 19:05:48.111062 containerd[1566]: time="2025-02-13T19:05:48.111021210Z" level=info msg="StartContainer for \"8324aee9161e231082801dc1d9c27c827a835fa6a313f352bf72a488672ad4ad\" returns successfully" Feb 13 19:05:48.117146 containerd[1566]: time="2025-02-13T19:05:48.117037233Z" level=info msg="StartContainer for \"252a136cd7df2cee330981d06d4fc3ca9d1f7e504fa97a3db9c9e31ca5c97148\" returns successfully" Feb 13 19:05:48.960013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301499421.mount: Deactivated successfully. Feb 13 19:05:49.104815 kubelet[2797]: E0213 19:05:49.103979 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:49.107775 kubelet[2797]: E0213 19:05:49.107745 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:49.193095 kubelet[2797]: I0213 19:05:49.193014 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jl99j" podStartSLOduration=18.192995915 podStartE2EDuration="18.192995915s" podCreationTimestamp="2025-02-13 19:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:49.138311704 +0000 UTC m=+34.244497563" watchObservedRunningTime="2025-02-13 19:05:49.192995915 +0000 UTC m=+34.299181774" Feb 13 19:05:49.216668 kubelet[2797]: I0213 19:05:49.215632 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7tgnf" podStartSLOduration=18.215613571 podStartE2EDuration="18.215613571s" podCreationTimestamp="2025-02-13 19:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:49.209163218 +0000 UTC m=+34.315349077" watchObservedRunningTime="2025-02-13 19:05:49.215613571 +0000 UTC m=+34.321799390" Feb 13 19:05:50.108008 kubelet[2797]: E0213 19:05:50.107980 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:50.108415 kubelet[2797]: E0213 19:05:50.108021 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:51.109472 kubelet[2797]: E0213 19:05:51.109389 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:05:53.093361 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:34948.service - OpenSSH per-connection server daemon (10.0.0.1:34948). Feb 13 19:05:53.149155 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 34948 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:53.152425 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:53.158803 systemd-logind[1550]: New session 10 of user core. Feb 13 19:05:53.172138 systemd[1]: Started session-10.scope - Session 10 of User core. 
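The pod_startup_latency_tracker lines record a startup SLO duration that excludes image-pull time, and the logged numbers are self-consistent. For coredns-7db6d8ff4d-jl99j no pull was tracked (zero-valued pull timestamps), so the SLO duration is simply observed-running minus creation; for cilium-operator, logged earlier, the ~7.16 s pull window is subtracted from the end-to-end duration. Worked out from the values in these entries (the operator case matches to within tens of nanoseconds of timestamp rounding):

```latex
% coredns-7db6d8ff4d-jl99j: no image pull tracked
\mathrm{SLO} = t_{\text{running}} - t_{\text{created}}
            = 19{:}05{:}49.192995915 - 19{:}05{:}31
            = 18.192995915\,\mathrm{s}

% cilium-operator-599987898-n8z45: pull window excluded
\mathrm{SLO} \approx \mathrm{E2E} - \bigl(t_{\text{pull,end}} - t_{\text{pull,start}}\bigr)
             = 9.128691156 - (39.283945253 - 32.121064423)
             \approx 1.96581\,\mathrm{s}
```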
Feb 13 19:05:53.300191 sshd[4232]: Connection closed by 10.0.0.1 port 34948 Feb 13 19:05:53.300648 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:53.311100 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:34958.service - OpenSSH per-connection server daemon (10.0.0.1:34958). Feb 13 19:05:53.311510 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:34948.service: Deactivated successfully. Feb 13 19:05:53.313330 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:05:53.314043 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:05:53.315253 systemd-logind[1550]: Removed session 10. Feb 13 19:05:53.355452 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 34958 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:53.356907 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:53.361307 systemd-logind[1550]: New session 11 of user core. Feb 13 19:05:53.371977 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:05:53.524509 sshd[4249]: Connection closed by 10.0.0.1 port 34958 Feb 13 19:05:53.524743 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:53.533020 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:34968.service - OpenSSH per-connection server daemon (10.0.0.1:34968). Feb 13 19:05:53.535083 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:34958.service: Deactivated successfully. Feb 13 19:05:53.538831 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:05:53.539455 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:05:53.544420 systemd-logind[1550]: Removed session 11. Feb 13 19:05:53.585824 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 34968 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:53.587388 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:53.591285 systemd-logind[1550]: New session 12 of user core. Feb 13 19:05:53.599113 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:05:53.718941 sshd[4263]: Connection closed by 10.0.0.1 port 34968 Feb 13 19:05:53.719462 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:53.722575 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:34968.service: Deactivated successfully. Feb 13 19:05:53.724814 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:05:53.725526 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:05:53.726313 systemd-logind[1550]: Removed session 12. Feb 13 19:05:58.734088 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:34984.service - OpenSSH per-connection server daemon (10.0.0.1:34984). Feb 13 19:05:58.777831 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 34984 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:05:58.778645 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:58.784706 systemd-logind[1550]: New session 13 of user core. Feb 13 19:05:58.792123 systemd[1]: Started session-13.scope - Session 13 of User core. 
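Every "Accepted publickey" line in this log carries the same fingerprint, SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg: the base64-encoded SHA-256 of the client's public key blob. A small sketch of how that format is produced with x/crypto/ssh; a throwaway ed25519 key stands in for the RSA key from these sessions, since the hash format is the same for any key type:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a throwaway key just to demonstrate the fingerprint format;
	// sshd applies the same hash to the key offered by the client.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:<base64>", the format of the Accepted-publickey lines.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```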
Feb 13 19:05:58.909916 sshd[4279]: Connection closed by 10.0.0.1 port 34984 Feb 13 19:05:58.910386 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:58.914443 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:34984.service: Deactivated successfully. Feb 13 19:05:58.916869 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:05:58.917005 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:05:58.918053 systemd-logind[1550]: Removed session 13. Feb 13 19:06:03.935026 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:33856.service - OpenSSH per-connection server daemon (10.0.0.1:33856). Feb 13 19:06:03.978836 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 33856 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:03.980349 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:03.984503 systemd-logind[1550]: New session 14 of user core. Feb 13 19:06:03.994017 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:06:04.104709 sshd[4298]: Connection closed by 10.0.0.1 port 33856 Feb 13 19:06:04.105336 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:04.114019 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:33862.service - OpenSSH per-connection server daemon (10.0.0.1:33862). Feb 13 19:06:04.114401 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:33856.service: Deactivated successfully. Feb 13 19:06:04.118057 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:06:04.118463 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:06:04.120499 systemd-logind[1550]: Removed session 14. Feb 13 19:06:04.156809 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 33862 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:04.158090 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:04.162417 systemd-logind[1550]: New session 15 of user core. Feb 13 19:06:04.174006 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:06:04.365206 sshd[4313]: Connection closed by 10.0.0.1 port 33862 Feb 13 19:06:04.366346 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:04.375001 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:33868.service - OpenSSH per-connection server daemon (10.0.0.1:33868). Feb 13 19:06:04.375381 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:33862.service: Deactivated successfully. Feb 13 19:06:04.378298 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:06:04.378373 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:06:04.379875 systemd-logind[1550]: Removed session 15. Feb 13 19:06:04.422574 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 33868 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:04.423906 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:04.427792 systemd-logind[1550]: New session 16 of user core. Feb 13 19:06:04.441062 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 19:06:05.573571 sshd[4326]: Connection closed by 10.0.0.1 port 33868 Feb 13 19:06:05.574884 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:05.585541 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:33878.service - OpenSSH per-connection server daemon (10.0.0.1:33878). Feb 13 19:06:05.585974 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:33868.service: Deactivated successfully. Feb 13 19:06:05.594046 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:06:05.596288 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:06:05.600690 systemd-logind[1550]: Removed session 16. Feb 13 19:06:05.638436 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:05.639618 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:05.643790 systemd-logind[1550]: New session 17 of user core. Feb 13 19:06:05.656038 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:06:05.876498 sshd[4349]: Connection closed by 10.0.0.1 port 33878 Feb 13 19:06:05.876923 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:05.886066 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886). Feb 13 19:06:05.886466 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:33878.service: Deactivated successfully. Feb 13 19:06:05.890802 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:06:05.891530 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:06:05.893660 systemd-logind[1550]: Removed session 17. Feb 13 19:06:05.930310 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:05.931643 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:05.935691 systemd-logind[1550]: New session 18 of user core. Feb 13 19:06:05.952318 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:06:06.065149 sshd[4363]: Connection closed by 10.0.0.1 port 33886 Feb 13 19:06:06.065795 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:06.070175 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:33886.service: Deactivated successfully. Feb 13 19:06:06.072829 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:06:06.073060 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:06:06.075737 systemd-logind[1550]: Removed session 18. Feb 13 19:06:11.081027 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:33898.service - OpenSSH per-connection server daemon (10.0.0.1:33898). Feb 13 19:06:11.130842 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 33898 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:11.131789 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:11.135964 systemd-logind[1550]: New session 19 of user core. Feb 13 19:06:11.146041 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:06:11.268916 sshd[4382]: Connection closed by 10.0.0.1 port 33898 Feb 13 19:06:11.268670 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:11.274643 systemd-logind[1550]: Session 19 logged out. 
Waiting for processes to exit. Feb 13 19:06:11.274894 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:33898.service: Deactivated successfully. Feb 13 19:06:11.276639 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:06:11.282888 systemd-logind[1550]: Removed session 19. Feb 13 19:06:16.283022 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:49928.service - OpenSSH per-connection server daemon (10.0.0.1:49928). Feb 13 19:06:16.339279 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 49928 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:16.340682 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:16.346959 systemd-logind[1550]: New session 20 of user core. Feb 13 19:06:16.354100 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:06:16.467043 sshd[4400]: Connection closed by 10.0.0.1 port 49928 Feb 13 19:06:16.467415 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:16.471185 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:49928.service: Deactivated successfully. Feb 13 19:06:16.473502 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:06:16.473606 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:06:16.474664 systemd-logind[1550]: Removed session 20. Feb 13 19:06:21.477019 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:49936.service - OpenSSH per-connection server daemon (10.0.0.1:49936). Feb 13 19:06:21.518469 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 49936 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:21.520090 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:21.524874 systemd-logind[1550]: New session 21 of user core. Feb 13 19:06:21.536062 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:06:21.663861 sshd[4415]: Connection closed by 10.0.0.1 port 49936 Feb 13 19:06:21.665660 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:21.678086 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:49942.service - OpenSSH per-connection server daemon (10.0.0.1:49942). Feb 13 19:06:21.678484 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:49936.service: Deactivated successfully. Feb 13 19:06:21.681320 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:06:21.682806 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:06:21.684353 systemd-logind[1550]: Removed session 21. Feb 13 19:06:21.723724 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 49942 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:06:21.725085 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:21.729823 systemd-logind[1550]: New session 22 of user core. Feb 13 19:06:21.737084 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 19:06:23.069579 containerd[1566]: time="2025-02-13T19:06:23.069464557Z" level=info msg="StopContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" with timeout 30 (s)" Feb 13 19:06:23.070270 containerd[1566]: time="2025-02-13T19:06:23.070119828Z" level=info msg="Stop container \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" with signal terminated" Feb 13 19:06:23.104066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002-rootfs.mount: Deactivated successfully. Feb 13 19:06:23.115621 containerd[1566]: time="2025-02-13T19:06:23.115577053Z" level=info msg="StopContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" with timeout 2 (s)" Feb 13 19:06:23.116046 containerd[1566]: time="2025-02-13T19:06:23.116021728Z" level=info msg="Stop container \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" with signal terminated" Feb 13 19:06:23.116366 containerd[1566]: time="2025-02-13T19:06:23.116335444Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:06:23.119636 containerd[1566]: time="2025-02-13T19:06:23.119460164Z" level=info msg="shim disconnected" id=b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002 namespace=k8s.io Feb 13 19:06:23.119636 containerd[1566]: time="2025-02-13T19:06:23.119511044Z" level=warning msg="cleaning up after shim disconnected" id=b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002 namespace=k8s.io Feb 13 19:06:23.119636 containerd[1566]: time="2025-02-13T19:06:23.119519404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:06:23.121686 systemd-networkd[1230]: lxc_health: Link DOWN Feb 13 19:06:23.121692 systemd-networkd[1230]: lxc_health: Lost carrier Feb 13 19:06:23.163443 containerd[1566]: time="2025-02-13T19:06:23.163391569Z" level=info msg="StopContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" returns successfully" Feb 13 19:06:23.165973 containerd[1566]: time="2025-02-13T19:06:23.165908417Z" level=info msg="StopPodSandbox for \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\"" Feb 13 19:06:23.165973 containerd[1566]: time="2025-02-13T19:06:23.165959816Z" level=info msg="Container to stop \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.167166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68-rootfs.mount: Deactivated successfully. Feb 13 19:06:23.169865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81-shm.mount: Deactivated successfully. 
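"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the standard two-phase stop: SIGTERM first, SIGKILL if the process outlives the grace period (30 s for the operator, 2 s for the agent above). A generic sketch of that pattern, not containerd's implementation:

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully sends SIGTERM and escalates to SIGKILL after timeout,
// mirroring the StopContainer semantics in the log lines above.
func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // SIGKILL, like runc after the timeout
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Println(stopGracefully(cmd, 2*time.Second))
}
```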
Feb 13 19:06:23.173088 containerd[1566]: time="2025-02-13T19:06:23.173024527Z" level=info msg="shim disconnected" id=7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68 namespace=k8s.io Feb 13 19:06:23.173171 containerd[1566]: time="2025-02-13T19:06:23.173091446Z" level=warning msg="cleaning up after shim disconnected" id=7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68 namespace=k8s.io Feb 13 19:06:23.173171 containerd[1566]: time="2025-02-13T19:06:23.173100526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:06:23.188835 containerd[1566]: time="2025-02-13T19:06:23.188790967Z" level=info msg="StopContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" returns successfully" Feb 13 19:06:23.189274 containerd[1566]: time="2025-02-13T19:06:23.189245482Z" level=info msg="StopPodSandbox for \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\"" Feb 13 19:06:23.189502 containerd[1566]: time="2025-02-13T19:06:23.189362400Z" level=info msg="Container to stop \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.189502 containerd[1566]: time="2025-02-13T19:06:23.189382880Z" level=info msg="Container to stop \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.189502 containerd[1566]: time="2025-02-13T19:06:23.189393280Z" level=info msg="Container to stop \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.189502 containerd[1566]: time="2025-02-13T19:06:23.189401640Z" level=info msg="Container to stop \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.189502 containerd[1566]: time="2025-02-13T19:06:23.189410359Z" level=info msg="Container to stop \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:06:23.192369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa-shm.mount: Deactivated successfully. 
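The teardown follows CRI ordering: each container is stopped with its grace period, then StopPodSandbox tears down the sandbox and its network, and only afterwards does kubelet unmount the pod's volumes, as the reconciler lines below show. A sketch of the same two calls against the CRI API; the socket path is an assumed default, and the container and sandbox IDs are the ones from these logs:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Stop the container first, honoring the grace period...
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002",
		Timeout:     30, // seconds, as in "StopContainer ... with timeout 30 (s)"
	})
	if err != nil {
		log.Fatal(err)
	}
	// ...then the sandbox, which triggers the network TearDown logged below.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```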
Feb 13 19:06:23.211569 containerd[1566]: time="2025-02-13T19:06:23.211462641Z" level=info msg="shim disconnected" id=2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81 namespace=k8s.io
Feb 13 19:06:23.211569 containerd[1566]: time="2025-02-13T19:06:23.211522240Z" level=warning msg="cleaning up after shim disconnected" id=2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81 namespace=k8s.io
Feb 13 19:06:23.211569 containerd[1566]: time="2025-02-13T19:06:23.211533720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:23.219471 containerd[1566]: time="2025-02-13T19:06:23.219413420Z" level=info msg="shim disconnected" id=2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa namespace=k8s.io
Feb 13 19:06:23.220693 containerd[1566]: time="2025-02-13T19:06:23.220548846Z" level=warning msg="cleaning up after shim disconnected" id=2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa namespace=k8s.io
Feb 13 19:06:23.220693 containerd[1566]: time="2025-02-13T19:06:23.220572605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:23.225298 containerd[1566]: time="2025-02-13T19:06:23.225201827Z" level=info msg="TearDown network for sandbox \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\" successfully"
Feb 13 19:06:23.225298 containerd[1566]: time="2025-02-13T19:06:23.225295146Z" level=info msg="StopPodSandbox for \"2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81\" returns successfully"
Feb 13 19:06:23.235084 containerd[1566]: time="2025-02-13T19:06:23.235035942Z" level=info msg="TearDown network for sandbox \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" successfully"
Feb 13 19:06:23.235084 containerd[1566]: time="2025-02-13T19:06:23.235071262Z" level=info msg="StopPodSandbox for \"2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa\" returns successfully"
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420448 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9rj9\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420499 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-net\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420522 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-kernel\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420537 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-bpf-maps\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420552 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-cgroup\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.420582 kubelet[2797]: I0213 19:06:23.420567 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cni-path\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420583 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-xtables-lock\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420600 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tmfx\" (UniqueName: \"kubernetes.io/projected/a5db5df6-61e9-4793-a2cc-3e041194bdf9-kube-api-access-5tmfx\") pod \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\" (UID: \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420616 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-hubble-tls\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420631 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-hostproc\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420647 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-run\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421070 kubelet[2797]: I0213 19:06:23.420665 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5db5df6-61e9-4793-a2cc-3e041194bdf9-cilium-config-path\") pod \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\" (UID: \"a5db5df6-61e9-4793-a2cc-3e041194bdf9\") "
Feb 13 19:06:23.421191 kubelet[2797]: I0213 19:06:23.420680 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-config-path\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421191 kubelet[2797]: I0213 19:06:23.420719 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ba56aad-2afe-43b8-878f-a6b87a22d540-clustermesh-secrets\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421191 kubelet[2797]: I0213 19:06:23.420734 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-etc-cni-netd\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.421191 kubelet[2797]: I0213 19:06:23.420748 2797 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-lib-modules\") pod \"9ba56aad-2afe-43b8-878f-a6b87a22d540\" (UID: \"9ba56aad-2afe-43b8-878f-a6b87a22d540\") "
Feb 13 19:06:23.422191 kubelet[2797]: I0213 19:06:23.421410 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422191 kubelet[2797]: I0213 19:06:23.421453 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422191 kubelet[2797]: I0213 19:06:23.421815 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422191 kubelet[2797]: I0213 19:06:23.421846 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422191 kubelet[2797]: I0213 19:06:23.421871 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422657 kubelet[2797]: I0213 19:06:23.422629 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.422691 kubelet[2797]: I0213 19:06:23.422662 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cni-path" (OuterVolumeSpecName: "cni-path") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.427812 kubelet[2797]: I0213 19:06:23.427747 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:06:23.427916 kubelet[2797]: I0213 19:06:23.427860 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba56aad-2afe-43b8-878f-a6b87a22d540-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:06:23.428012 kubelet[2797]: I0213 19:06:23.427979 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.428058 kubelet[2797]: I0213 19:06:23.428022 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-hostproc" (OuterVolumeSpecName: "hostproc") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.428058 kubelet[2797]: I0213 19:06:23.428039 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:23.428162 kubelet[2797]: I0213 19:06:23.428138 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5db5df6-61e9-4793-a2cc-3e041194bdf9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5db5df6-61e9-4793-a2cc-3e041194bdf9" (UID: "a5db5df6-61e9-4793-a2cc-3e041194bdf9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:06:23.429019 kubelet[2797]: I0213 19:06:23.428995 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9" (OuterVolumeSpecName: "kube-api-access-t9rj9") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "kube-api-access-t9rj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:23.429933 kubelet[2797]: I0213 19:06:23.429913 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9ba56aad-2afe-43b8-878f-a6b87a22d540" (UID: "9ba56aad-2afe-43b8-878f-a6b87a22d540"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:23.430024 kubelet[2797]: I0213 19:06:23.429948 2797 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5db5df6-61e9-4793-a2cc-3e041194bdf9-kube-api-access-5tmfx" (OuterVolumeSpecName: "kube-api-access-5tmfx") pod "a5db5df6-61e9-4793-a2cc-3e041194bdf9" (UID: "a5db5df6-61e9-4793-a2cc-3e041194bdf9"). InnerVolumeSpecName "kube-api-access-5tmfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:23.521417 kubelet[2797]: I0213 19:06:23.521374 2797 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521417 kubelet[2797]: I0213 19:06:23.521405 2797 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ba56aad-2afe-43b8-878f-a6b87a22d540-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521417 kubelet[2797]: I0213 19:06:23.521414 2797 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521417 kubelet[2797]: I0213 19:06:23.521424 2797 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521417 kubelet[2797]: I0213 19:06:23.521432 2797 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t9rj9\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-kube-api-access-t9rj9\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521440 2797 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521447 2797 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521455 2797 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521463 2797 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521471 2797 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521478 2797 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521486 2797 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5tmfx\" (UniqueName: \"kubernetes.io/projected/a5db5df6-61e9-4793-a2cc-3e041194bdf9-kube-api-access-5tmfx\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521644 kubelet[2797]: I0213 19:06:23.521511 2797 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ba56aad-2afe-43b8-878f-a6b87a22d540-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521830 kubelet[2797]: I0213 19:06:23.521519 2797 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521830 kubelet[2797]: I0213 19:06:23.521529 2797 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ba56aad-2afe-43b8-878f-a6b87a22d540-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:23.521830 kubelet[2797]: I0213 19:06:23.521536 2797 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5db5df6-61e9-4793-a2cc-3e041194bdf9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 19:06:24.094572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e86bc0c779414c92c26af52feaff5e4bd2d528257dafae29164313300422e81-rootfs.mount: Deactivated successfully.
Feb 13 19:06:24.094733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d6c3bab23fa4c0f3bb3d0560019bad1382943edb1615a022b9844e4d93af3fa-rootfs.mount: Deactivated successfully.
Feb 13 19:06:24.094831 systemd[1]: var-lib-kubelet-pods-a5db5df6\x2d61e9\x2d4793\x2da2cc\x2d3e041194bdf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tmfx.mount: Deactivated successfully.
Feb 13 19:06:24.094935 systemd[1]: var-lib-kubelet-pods-9ba56aad\x2d2afe\x2d43b8\x2d878f\x2da6b87a22d540-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9rj9.mount: Deactivated successfully.
Feb 13 19:06:24.095014 systemd[1]: var-lib-kubelet-pods-9ba56aad\x2d2afe\x2d43b8\x2d878f\x2da6b87a22d540-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:06:24.095095 systemd[1]: var-lib-kubelet-pods-9ba56aad\x2d2afe\x2d43b8\x2d878f\x2da6b87a22d540-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
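The six `Deactivated successfully` unit names above are systemd's escaped form of per-volume mount paths under /var/lib/kubelet/pods: "/" becomes "-", a literal "-" becomes "\x2d", and "~" becomes "\x7e". Below is a minimal Go sketch of that escaping, an approximation of `systemd-escape --path` rather than systemd's exact implementation; the helper name escapeSystemdPath is illustrative.

```go
// Approximate sketch of systemd unit-name path escaping (not byte-exact):
// "/" separators become "-", and bytes outside [A-Za-z0-9._] are written as
// \xNN, which is why "-" shows up as \x2d and "~" as \x7e in the log above.
package main

import (
	"fmt"
	"strings"
)

func escapeSystemdPath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' => \x2d, '~' => \x7e
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the last unit name in the log span above.
	fmt.Println(escapeSystemdPath(
		"/var/lib/kubelet/pods/9ba56aad-2afe-43b8-878f-a6b87a22d540/volumes/kubernetes.io~projected/hubble-tls"))
}
```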
Feb 13 19:06:24.179896 kubelet[2797]: I0213 19:06:24.179865 2797 scope.go:117] "RemoveContainer" containerID="b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002"
Feb 13 19:06:24.183036 containerd[1566]: time="2025-02-13T19:06:24.182683438Z" level=info msg="RemoveContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\""
Feb 13 19:06:24.190074 containerd[1566]: time="2025-02-13T19:06:24.189638156Z" level=info msg="RemoveContainer for \"b2c3ad588f12c6357533df16cacef6839afb5d7dd588481fdc13278107131002\" returns successfully"
Feb 13 19:06:24.191207 kubelet[2797]: I0213 19:06:24.189967 2797 scope.go:117] "RemoveContainer" containerID="7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68"
Feb 13 19:06:24.192414 containerd[1566]: time="2025-02-13T19:06:24.192367124Z" level=info msg="RemoveContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\""
Feb 13 19:06:24.195154 containerd[1566]: time="2025-02-13T19:06:24.195119052Z" level=info msg="RemoveContainer for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" returns successfully"
Feb 13 19:06:24.195309 kubelet[2797]: I0213 19:06:24.195276 2797 scope.go:117] "RemoveContainer" containerID="418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28"
Feb 13 19:06:24.196612 containerd[1566]: time="2025-02-13T19:06:24.196591035Z" level=info msg="RemoveContainer for \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\""
Feb 13 19:06:24.198946 containerd[1566]: time="2025-02-13T19:06:24.198918527Z" level=info msg="RemoveContainer for \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\" returns successfully"
Feb 13 19:06:24.199478 kubelet[2797]: I0213 19:06:24.199173 2797 scope.go:117] "RemoveContainer" containerID="9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9"
Feb 13 19:06:24.200732 containerd[1566]: time="2025-02-13T19:06:24.200634107Z" level=info msg="RemoveContainer for \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\""
Feb 13 19:06:24.209205 containerd[1566]: time="2025-02-13T19:06:24.209057648Z" level=info msg="RemoveContainer for \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\" returns successfully"
Feb 13 19:06:24.209556 kubelet[2797]: I0213 19:06:24.209360 2797 scope.go:117] "RemoveContainer" containerID="9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca"
Feb 13 19:06:24.211116 containerd[1566]: time="2025-02-13T19:06:24.211086904Z" level=info msg="RemoveContainer for \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\""
Feb 13 19:06:24.217395 containerd[1566]: time="2025-02-13T19:06:24.217357471Z" level=info msg="RemoveContainer for \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\" returns successfully"
Feb 13 19:06:24.217746 kubelet[2797]: I0213 19:06:24.217720 2797 scope.go:117] "RemoveContainer" containerID="9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81"
Feb 13 19:06:24.218898 containerd[1566]: time="2025-02-13T19:06:24.218873933Z" level=info msg="RemoveContainer for \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\""
Feb 13 19:06:24.221037 containerd[1566]: time="2025-02-13T19:06:24.221005788Z" level=info msg="RemoveContainer for \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\" returns successfully"
Feb 13 19:06:24.221207 kubelet[2797]: I0213 19:06:24.221181 2797 scope.go:117] "RemoveContainer" containerID="7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68"
Feb 13 19:06:24.221449 containerd[1566]: time="2025-02-13T19:06:24.221406103Z" level=error msg="ContainerStatus for \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\": not found"
Feb 13 19:06:24.224531 kubelet[2797]: E0213 19:06:24.224491 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\": not found" containerID="7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68"
Feb 13 19:06:24.224637 kubelet[2797]: I0213 19:06:24.224537 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68"} err="failed to get container status \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e67c18dc67cff3609a5cdec43c596da7fc5346e7a7d576c64698e43603a6d68\": not found"
Feb 13 19:06:24.224678 kubelet[2797]: I0213 19:06:24.224634 2797 scope.go:117] "RemoveContainer" containerID="418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28"
Feb 13 19:06:24.224941 containerd[1566]: time="2025-02-13T19:06:24.224910022Z" level=error msg="ContainerStatus for \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\": not found"
Feb 13 19:06:24.225040 kubelet[2797]: E0213 19:06:24.225021 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\": not found" containerID="418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28"
Feb 13 19:06:24.225081 kubelet[2797]: I0213 19:06:24.225046 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28"} err="failed to get container status \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\": rpc error: code = NotFound desc = an error occurred when try to find container \"418142d57797fe813c8624ee8d5483e75e0ca4644d09f2745a6e70f12f8e7a28\": not found"
Feb 13 19:06:24.225081 kubelet[2797]: I0213 19:06:24.225060 2797 scope.go:117] "RemoveContainer" containerID="9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9"
Feb 13 19:06:24.225247 containerd[1566]: time="2025-02-13T19:06:24.225212058Z" level=error msg="ContainerStatus for \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\": not found"
Feb 13 19:06:24.225337 kubelet[2797]: E0213 19:06:24.225316 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\": not found" containerID="9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9"
Feb 13 19:06:24.225377 kubelet[2797]: I0213 19:06:24.225342 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9"} err="failed to get container status \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"9669806ba962bf1adbbd716fee4388e33f5fd241226b6e955db8f280a7f989a9\": not found"
Feb 13 19:06:24.225377 kubelet[2797]: I0213 19:06:24.225359 2797 scope.go:117] "RemoveContainer" containerID="9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca"
Feb 13 19:06:24.225567 containerd[1566]: time="2025-02-13T19:06:24.225531655Z" level=error msg="ContainerStatus for \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\": not found"
Feb 13 19:06:24.225652 kubelet[2797]: E0213 19:06:24.225633 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\": not found" containerID="9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca"
Feb 13 19:06:24.225680 kubelet[2797]: I0213 19:06:24.225654 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca"} err="failed to get container status \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"9227dd94bbb5bbaec9bef68e6f6d9dae2f2cdfff32511fd76fae5bf8e97364ca\": not found"
Feb 13 19:06:24.225680 kubelet[2797]: I0213 19:06:24.225668 2797 scope.go:117] "RemoveContainer" containerID="9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81"
Feb 13 19:06:24.225903 containerd[1566]: time="2025-02-13T19:06:24.225873091Z" level=error msg="ContainerStatus for \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\": not found"
Feb 13 19:06:24.225990 kubelet[2797]: E0213 19:06:24.225970 2797 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\": not found" containerID="9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81"
Feb 13 19:06:24.226024 kubelet[2797]: I0213 19:06:24.225993 2797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81"} err="failed to get container status \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bc037afc748805cc5d7168776d0e86bd477968627477a80e0637b8628e77a81\": not found"
Feb 13 19:06:24.975603 kubelet[2797]: I0213 19:06:24.975567 2797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" path="/var/lib/kubelet/pods/9ba56aad-2afe-43b8-878f-a6b87a22d540/volumes"
Feb 13 19:06:24.976137 kubelet[2797]: I0213 19:06:24.976119 2797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5db5df6-61e9-4793-a2cc-3e041194bdf9" path="/var/lib/kubelet/pods/a5db5df6-61e9-4793-a2cc-3e041194bdf9/volumes"
Feb 13 19:06:25.036661 sshd[4430]: Connection closed by 10.0.0.1 port 49942
Feb 13 19:06:25.036341 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:25.040712 kubelet[2797]: E0213 19:06:25.040671 2797 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:06:25.052076 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:60912.service - OpenSSH per-connection server daemon (10.0.0.1:60912).
Feb 13 19:06:25.052536 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:49942.service: Deactivated successfully.
Feb 13 19:06:25.055015 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:06:25.056919 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:06:25.057864 systemd-logind[1550]: Removed session 22.
Feb 13 19:06:25.094653 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 60912 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:06:25.096012 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:25.099901 systemd-logind[1550]: New session 23 of user core.
Feb 13 19:06:25.110029 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:06:25.987489 sshd[4599]: Connection closed by 10.0.0.1 port 60912
Feb 13 19:06:25.988296 sshd-session[4594]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:25.997746 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:60928.service - OpenSSH per-connection server daemon (10.0.0.1:60928).
Feb 13 19:06:25.998231 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:60912.service: Deactivated successfully.
Feb 13 19:06:26.004991 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:06:26.006368 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:06:26.012771 systemd-logind[1550]: Removed session 23.
Feb 13 19:06:26.019148 kubelet[2797]: I0213 19:06:26.019055 2797 topology_manager.go:215] "Topology Admit Handler" podUID="c39e23f3-3878-44d6-b42f-22bc0e7f85de" podNamespace="kube-system" podName="cilium-hjbfn"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019244 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="mount-cgroup"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019255 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="apply-sysctl-overwrites"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019262 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="mount-bpf-fs"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019269 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5db5df6-61e9-4793-a2cc-3e041194bdf9" containerName="cilium-operator"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019277 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="cilium-agent"
Feb 13 19:06:26.020167 kubelet[2797]: E0213 19:06:26.019284 2797 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="clean-cilium-state"
Feb 13 19:06:26.020167 kubelet[2797]: I0213 19:06:26.019306 2797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5db5df6-61e9-4793-a2cc-3e041194bdf9" containerName="cilium-operator"
Feb 13 19:06:26.020167 kubelet[2797]: I0213 19:06:26.019320 2797 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba56aad-2afe-43b8-878f-a6b87a22d540" containerName="cilium-agent"
Feb 13 19:06:26.058800 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 60928 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:06:26.060233 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:26.066883 systemd-logind[1550]: New session 24 of user core.
Feb 13 19:06:26.073882 systemd[1]: Started session-24.scope - Session 24 of User core.
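When the replacement pod cilium-hjbfn is admitted, the cpu and memory managers drop bookkeeping left behind by the two deleted pods, which is what the "RemoveStaleState" lines record. The Go sketch below illustrates that pattern only; it is not kubelet's cpu_manager code, and the assignment map is invented for the example.

```go
// Illustrative sketch of RemoveStaleState-style cleanup: per-container
// resource assignments whose pod is no longer active are deleted at the
// next pod admission. Data and types are hypothetical.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key][]int{
		{"9ba56aad-2afe-43b8-878f-a6b87a22d540", "cilium-agent"}:    {0, 1},
		{"a5db5df6-61e9-4793-a2cc-3e041194bdf9", "cilium-operator"}: {2},
	}
	// Only the newly admitted cilium-hjbfn pod is active now.
	removeStaleState(assignments, map[string]bool{
		"c39e23f3-3878-44d6-b42f-22bc0e7f85de": true,
	})
}
```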
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137179 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-hostproc\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137231 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-etc-cni-netd\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137253 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-lib-modules\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137270 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-xtables-lock\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137285 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c39e23f3-3878-44d6-b42f-22bc0e7f85de-hubble-tls\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.137290 kubelet[2797]: I0213 19:06:26.137302 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c39e23f3-3878-44d6-b42f-22bc0e7f85de-clustermesh-secrets\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137318 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-host-proc-sys-kernel\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137337 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c39e23f3-3878-44d6-b42f-22bc0e7f85de-cilium-ipsec-secrets\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137351 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-host-proc-sys-net\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137366 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-cilium-run\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137385 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-cilium-cgroup\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139018 kubelet[2797]: I0213 19:06:26.137400 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-cni-path\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139140 kubelet[2797]: I0213 19:06:26.137416 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c39e23f3-3878-44d6-b42f-22bc0e7f85de-bpf-maps\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139140 kubelet[2797]: I0213 19:06:26.137435 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c39e23f3-3878-44d6-b42f-22bc0e7f85de-cilium-config-path\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.139140 kubelet[2797]: I0213 19:06:26.137453 2797 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgrr\" (UniqueName: \"kubernetes.io/projected/c39e23f3-3878-44d6-b42f-22bc0e7f85de-kube-api-access-wxgrr\") pod \"cilium-hjbfn\" (UID: \"c39e23f3-3878-44d6-b42f-22bc0e7f85de\") " pod="kube-system/cilium-hjbfn"
Feb 13 19:06:26.144718 sshd[4612]: Connection closed by 10.0.0.1 port 60928
Feb 13 19:06:26.145824 sshd-session[4606]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:26.164297 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:60936.service - OpenSSH per-connection server daemon (10.0.0.1:60936).
Feb 13 19:06:26.165320 systemd[1]: sshd@23-10.0.0.61:22-10.0.0.1:60928.service: Deactivated successfully.
Feb 13 19:06:26.167523 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:06:26.169793 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:06:26.171271 systemd-logind[1550]: Removed session 24.
Feb 13 19:06:26.207555 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 60936 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:06:26.209028 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:26.214554 systemd-logind[1550]: New session 25 of user core.
Feb 13 19:06:26.225089 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:06:26.328365 kubelet[2797]: E0213 19:06:26.328319 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:26.329213 containerd[1566]: time="2025-02-13T19:06:26.329158158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjbfn,Uid:c39e23f3-3878-44d6-b42f-22bc0e7f85de,Namespace:kube-system,Attempt:0,}"
Feb 13 19:06:26.359481 containerd[1566]: time="2025-02-13T19:06:26.359227536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:06:26.359481 containerd[1566]: time="2025-02-13T19:06:26.359296415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:06:26.359481 containerd[1566]: time="2025-02-13T19:06:26.359313415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:26.359481 containerd[1566]: time="2025-02-13T19:06:26.359409894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:26.396713 containerd[1566]: time="2025-02-13T19:06:26.396676720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjbfn,Uid:c39e23f3-3878-44d6-b42f-22bc0e7f85de,Namespace:kube-system,Attempt:0,} returns sandbox id \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\""
Feb 13 19:06:26.397513 kubelet[2797]: E0213 19:06:26.397492 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:26.399651 containerd[1566]: time="2025-02-13T19:06:26.399516851Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:06:26.412313 containerd[1566]: time="2025-02-13T19:06:26.412254243Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c431f0d538137cd8800312c744f4a860512f4348a9f33b0e06dbf9fff06fcfbd\""
Feb 13 19:06:26.413320 containerd[1566]: time="2025-02-13T19:06:26.412740838Z" level=info msg="StartContainer for \"c431f0d538137cd8800312c744f4a860512f4348a9f33b0e06dbf9fff06fcfbd\""
Feb 13 19:06:26.462059 containerd[1566]: time="2025-02-13T19:06:26.461996864Z" level=info msg="StartContainer for \"c431f0d538137cd8800312c744f4a860512f4348a9f33b0e06dbf9fff06fcfbd\" returns successfully"
Feb 13 19:06:26.504946 containerd[1566]: time="2025-02-13T19:06:26.504850713Z" level=info msg="shim disconnected" id=c431f0d538137cd8800312c744f4a860512f4348a9f33b0e06dbf9fff06fcfbd namespace=k8s.io
Feb 13 19:06:26.504946 containerd[1566]: time="2025-02-13T19:06:26.504906993Z" level=warning msg="cleaning up after shim disconnected" id=c431f0d538137cd8800312c744f4a860512f4348a9f33b0e06dbf9fff06fcfbd namespace=k8s.io
Feb 13 19:06:26.504946 containerd[1566]: time="2025-02-13T19:06:26.504916432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:26.716359 kubelet[2797]: I0213 19:06:26.714996 2797 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:06:26Z","lastTransitionTime":"2025-02-13T19:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:06:27.195989 kubelet[2797]: E0213 19:06:27.195915 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:27.199225 containerd[1566]: time="2025-02-13T19:06:27.199131140Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:06:27.209049 containerd[1566]: time="2025-02-13T19:06:27.208999289Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a\""
Feb 13 19:06:27.210935 containerd[1566]: time="2025-02-13T19:06:27.209615764Z" level=info msg="StartContainer for \"c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a\""
Feb 13 19:06:27.274974 containerd[1566]: time="2025-02-13T19:06:27.274846121Z" level=info msg="StartContainer for \"c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a\" returns successfully"
Feb 13 19:06:27.288745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a-rootfs.mount: Deactivated successfully.
Feb 13 19:06:27.291982 containerd[1566]: time="2025-02-13T19:06:27.291922524Z" level=info msg="shim disconnected" id=c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a namespace=k8s.io
Feb 13 19:06:27.291982 containerd[1566]: time="2025-02-13T19:06:27.291979403Z" level=warning msg="cleaning up after shim disconnected" id=c75d71a096022eb6f363de1bba6157a3fec5abea8221b37f665c6976cd057f8a namespace=k8s.io
Feb 13 19:06:27.291982 containerd[1566]: time="2025-02-13T19:06:27.291987803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:28.200275 kubelet[2797]: E0213 19:06:28.200233 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:28.203879 containerd[1566]: time="2025-02-13T19:06:28.203821186Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:06:28.224091 containerd[1566]: time="2025-02-13T19:06:28.223956256Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c\""
Feb 13 19:06:28.224575 containerd[1566]: time="2025-02-13T19:06:28.224459052Z" level=info msg="StartContainer for \"d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c\""
Feb 13 19:06:28.287121 containerd[1566]: time="2025-02-13T19:06:28.287075403Z" level=info msg="StartContainer for \"d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c\" returns successfully"
Feb 13 19:06:28.307617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c-rootfs.mount: Deactivated successfully.
Feb 13 19:06:28.311678 containerd[1566]: time="2025-02-13T19:06:28.311619076Z" level=info msg="shim disconnected" id=d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c namespace=k8s.io
Feb 13 19:06:28.311678 containerd[1566]: time="2025-02-13T19:06:28.311677116Z" level=warning msg="cleaning up after shim disconnected" id=d5083e45e8f581cbe9c98e0a809c28a9990234e67b32694eca620764250d495c namespace=k8s.io
Feb 13 19:06:28.311678 containerd[1566]: time="2025-02-13T19:06:28.311686875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:29.203776 kubelet[2797]: E0213 19:06:29.203733 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:29.208797 containerd[1566]: time="2025-02-13T19:06:29.207835428Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:06:29.441434 containerd[1566]: time="2025-02-13T19:06:29.441380114Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0\""
Feb 13 19:06:29.441974 containerd[1566]: time="2025-02-13T19:06:29.441936190Z" level=info msg="StartContainer for \"62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0\""
Feb 13 19:06:29.502713 containerd[1566]: time="2025-02-13T19:06:29.502587965Z" level=info msg="StartContainer for \"62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0\" returns successfully"
Feb 13 19:06:29.520720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0-rootfs.mount: Deactivated successfully.
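The span above shows the same four-step cycle repeating once per Cilium init container: CreateContainer within the sandbox, StartContainer, the container runs to completion, and its shim exits ("shim disconnected") before the next step begins. The Go sketch below mirrors only that sequencing; the Runtime interface is a hypothetical stand-in for the CRI RuntimeService that containerd implements, not its real API.

```go
// Sketch of sequential init-container execution: each step is created in the
// same sandbox, started, and awaited before the next one runs. All types and
// IDs here are fakes used to mimic the log's ordering.
package main

import "fmt"

type Runtime interface {
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(id string) error
	WaitContainer(id string) error // returns once the shim reports exit
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	id := fmt.Sprintf("ctr-%d", f.n)
	fmt.Printf("CreateContainer within sandbox %q for %s returns %s\n", sandboxID, name, id)
	return id, nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func (f *fakeRuntime) WaitContainer(id string) error {
	fmt.Printf("shim disconnected id=%s, cleaning up dead shim\n", id)
	return nil
}

func runInitSteps(rt Runtime, sandboxID string, steps []string) error {
	for _, name := range steps {
		id, err := rt.CreateContainer(sandboxID, name)
		if err != nil {
			return err
		}
		if err := rt.StartContainer(id); err != nil {
			return err
		}
		if err := rt.WaitContainer(id); err != nil { // next step only after exit
			return err
		}
	}
	return nil
}

func main() {
	steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	_ = runInitSteps(&fakeRuntime{}, "255efa18dfd9", steps)
}
```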
Feb 13 19:06:29.526833 containerd[1566]: time="2025-02-13T19:06:29.526741459Z" level=info msg="shim disconnected" id=62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0 namespace=k8s.io
Feb 13 19:06:29.526833 containerd[1566]: time="2025-02-13T19:06:29.526828618Z" level=warning msg="cleaning up after shim disconnected" id=62d639c083b100f6b2822c19e4f9ed34d3afee94d62e99904e34842b5909f0b0 namespace=k8s.io
Feb 13 19:06:29.526833 containerd[1566]: time="2025-02-13T19:06:29.526837738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:29.975588 kubelet[2797]: E0213 19:06:29.975446 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:30.042275 kubelet[2797]: E0213 19:06:30.042209 2797 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:06:30.209049 kubelet[2797]: E0213 19:06:30.207047 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:30.211692 containerd[1566]: time="2025-02-13T19:06:30.211615716Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:06:30.238066 containerd[1566]: time="2025-02-13T19:06:30.237941213Z" level=info msg="CreateContainer within sandbox \"255efa18dfd9789e6eea06ac7514d60a6286cbe890edfc1eecc5570fa1c5a9d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"456e1c454041230777f703e8a78d8721064c46281343a5a4f6947ece722dac7e\""
Feb 13 19:06:30.238889 containerd[1566]: time="2025-02-13T19:06:30.238848727Z" level=info msg="StartContainer for \"456e1c454041230777f703e8a78d8721064c46281343a5a4f6947ece722dac7e\""
Feb 13 19:06:30.290804 containerd[1566]: time="2025-02-13T19:06:30.290645248Z" level=info msg="StartContainer for \"456e1c454041230777f703e8a78d8721064c46281343a5a4f6947ece722dac7e\" returns successfully"
Feb 13 19:06:30.576781 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:06:31.211720 kubelet[2797]: E0213 19:06:31.211647 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:31.226785 kubelet[2797]: I0213 19:06:31.226336 2797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjbfn" podStartSLOduration=6.226318358 podStartE2EDuration="6.226318358s" podCreationTimestamp="2025-02-13 19:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:06:31.226039919 +0000 UTC m=+76.332225778" watchObservedRunningTime="2025-02-13 19:06:31.226318358 +0000 UTC m=+76.332504217"
Feb 13 19:06:32.329820 kubelet[2797]: E0213 19:06:32.329783 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:33.490198 systemd-networkd[1230]: lxc_health: Link UP
Feb 13 19:06:33.500244 systemd-networkd[1230]: lxc_health: Gained carrier
Feb 13 19:06:34.332427 kubelet[2797]: E0213 19:06:34.330776 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:35.216987 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Feb 13 19:06:35.220872 kubelet[2797]: E0213 19:06:35.220595 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:35.973993 kubelet[2797]: E0213 19:06:35.973920 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:36.225637 kubelet[2797]: E0213 19:06:36.225126 2797 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:06:36.941713 kubelet[2797]: E0213 19:06:36.941673 2797 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:47220->127.0.0.1:41715: read tcp 127.0.0.1:47220->127.0.0.1:41715: read: connection reset by peer
Feb 13 19:06:39.052728 sshd[4621]: Connection closed by 10.0.0.1 port 60936
Feb 13 19:06:39.053275 sshd-session[4615]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:39.056581 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:60936.service: Deactivated successfully.
Feb 13 19:06:39.059153 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:06:39.060051 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:06:39.061080 systemd-logind[1550]: Removed session 25.
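The pod_startup_latency_tracker line above is plain timestamp arithmetic: with no image pulls (both pull timestamps are the zero time), podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. The small Go program below reproduces that value from the two timestamps in the log; the layout strings are assumptions matching the log's timestamp format.

```go
// Recompute podStartSLOduration for cilium-hjbfn from the logged timestamps:
// 19:06:31.226318358 - 19:06:25 = 6.226318358s, matching the log line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const base = "2006-01-02 15:04:05 -0700 MST"
	const frac = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(base, "2025-02-13 19:06:25 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(frac, "2025-02-13 19:06:31.226318358 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("podStartSLOduration:", watched.Sub(created)) // 6.226318358s
}
```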