Jan 13 21:15:31.898206 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 21:15:31.898227 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:15:31.898237 kernel: KASLR enabled
Jan 13 21:15:31.898243 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:15:31.898249 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 13 21:15:31.898254 kernel: random: crng init done
Jan 13 21:15:31.898262 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:15:31.898268 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 13 21:15:31.898274 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:15:31.898282 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898288 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898294 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898300 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898306 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898314 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898321 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898328 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898335 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:15:31.898341 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 21:15:31.898347 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:15:31.898354 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:15:31.898360 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Jan 13 21:15:31.898367 kernel: Zone ranges:
Jan 13 21:15:31.898373 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:15:31.898379 kernel: DMA32 empty
Jan 13 21:15:31.898387 kernel: Normal empty
Jan 13 21:15:31.898393 kernel: Movable zone start for each node
Jan 13 21:15:31.898400 kernel: Early memory node ranges
Jan 13 21:15:31.898406 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 21:15:31.898413 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 21:15:31.898419 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 21:15:31.898425 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 21:15:31.898432 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 21:15:31.898438 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 21:15:31.898444 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 21:15:31.898451 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:15:31.898457 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 21:15:31.898465 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:15:31.898472 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 21:15:31.898478 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:15:31.898487 kernel: psci: Trusted OS migration not required
Jan 13 21:15:31.898494 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:15:31.898501 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 21:15:31.898510 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:15:31.898517 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:15:31.898524 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 21:15:31.898530 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:15:31.898537 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:15:31.898544 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 21:15:31.898551 kernel: CPU features: detected: Spectre-v4
Jan 13 21:15:31.898557 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:15:31.898564 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 21:15:31.898571 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 21:15:31.898579 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 21:15:31.898586 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 21:15:31.898593 kernel: alternatives: applying boot alternatives
Jan 13 21:15:31.898600 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:15:31.898608 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:15:31.898614 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:15:31.898621 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:15:31.898628 kernel: Fallback order for Node 0: 0
Jan 13 21:15:31.898635 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 21:15:31.898642 kernel: Policy zone: DMA
Jan 13 21:15:31.898649 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:15:31.898701 kernel: software IO TLB: area num 4.
Jan 13 21:15:31.898708 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 21:15:31.898716 kernel: Memory: 2386536K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185752K reserved, 0K cma-reserved)
Jan 13 21:15:31.898723 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:15:31.898730 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:15:31.898738 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:15:31.898745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:15:31.898752 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:15:31.898758 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:15:31.898765 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:15:31.898772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:15:31.898779 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:15:31.898787 kernel: GICv3: 256 SPIs implemented
Jan 13 21:15:31.898794 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:15:31.898801 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:15:31.898807 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 21:15:31.898814 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 21:15:31.898821 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 21:15:31.898828 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:15:31.898835 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:15:31.898841 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 21:15:31.898848 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 21:15:31.898855 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:15:31.898863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:15:31.898871 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 21:15:31.898878 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 21:15:31.898885 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 21:15:31.898891 kernel: arm-pv: using stolen time PV
Jan 13 21:15:31.898898 kernel: Console: colour dummy device 80x25
Jan 13 21:15:31.898905 kernel: ACPI: Core revision 20230628
Jan 13 21:15:31.898912 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 21:15:31.898919 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:15:31.898926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:15:31.898934 kernel: landlock: Up and running.
Jan 13 21:15:31.898941 kernel: SELinux: Initializing.
Jan 13 21:15:31.898948 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:15:31.898955 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:15:31.898962 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:15:31.898969 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:15:31.898976 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:15:31.898983 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:15:31.898990 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 21:15:31.898998 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 21:15:31.899005 kernel: Remapping and enabling EFI services.
Jan 13 21:15:31.899012 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:15:31.899019 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:15:31.899026 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 21:15:31.899033 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 21:15:31.899040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:15:31.899047 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 21:15:31.899054 kernel: Detected PIPT I-cache on CPU2
Jan 13 21:15:31.899061 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 21:15:31.899070 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 21:15:31.899077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:15:31.899089 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 21:15:31.899097 kernel: Detected PIPT I-cache on CPU3
Jan 13 21:15:31.899104 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 21:15:31.899112 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 21:15:31.899119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:15:31.899126 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 21:15:31.899134 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:15:31.899143 kernel: SMP: Total of 4 processors activated.
Jan 13 21:15:31.899151 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:15:31.899158 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 21:15:31.899172 kernel: CPU features: detected: Common not Private translations
Jan 13 21:15:31.899180 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:15:31.899187 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 21:15:31.899195 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 21:15:31.899202 kernel: CPU features: detected: LSE atomic instructions
Jan 13 21:15:31.899212 kernel: CPU features: detected: Privileged Access Never
Jan 13 21:15:31.899219 kernel: CPU features: detected: RAS Extension Support
Jan 13 21:15:31.899227 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 21:15:31.899234 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:15:31.899242 kernel: alternatives: applying system-wide alternatives
Jan 13 21:15:31.899249 kernel: devtmpfs: initialized
Jan 13 21:15:31.899256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:15:31.899263 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:15:31.899271 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:15:31.899280 kernel: SMBIOS 3.0.0 present.
Jan 13 21:15:31.899287 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 13 21:15:31.899294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:15:31.899302 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:15:31.899309 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:15:31.899317 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:15:31.899324 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:15:31.899331 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Jan 13 21:15:31.899338 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:15:31.899347 kernel: cpuidle: using governor menu
Jan 13 21:15:31.899354 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:15:31.899362 kernel: ASID allocator initialised with 32768 entries
Jan 13 21:15:31.899369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:15:31.899377 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:15:31.899384 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 21:15:31.899391 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 21:15:31.899399 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:15:31.899406 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:15:31.899415 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:15:31.899422 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:15:31.899430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:15:31.899437 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:15:31.899445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:15:31.899452 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:15:31.899459 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:15:31.899466 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:15:31.899474 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:15:31.899482 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:15:31.899490 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:15:31.899497 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:15:31.899504 kernel: ACPI: Interpreter enabled
Jan 13 21:15:31.899512 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:15:31.899519 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:15:31.899527 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 21:15:31.899534 kernel: printk: console [ttyAMA0] enabled
Jan 13 21:15:31.899541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:15:31.901484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:15:31.901563 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:15:31.901628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:15:31.901705 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 21:15:31.901770 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 21:15:31.901781 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 21:15:31.901788 kernel: PCI host bridge to bus 0000:00
Jan 13 21:15:31.901863 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 21:15:31.901924 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:15:31.901982 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 21:15:31.902039 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:15:31.902120 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 21:15:31.902210 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:15:31.902285 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 21:15:31.902351 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 21:15:31.902417 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:15:31.902483 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:15:31.902549 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 21:15:31.902614 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 21:15:31.902692 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 21:15:31.902755 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:15:31.902834 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 21:15:31.902845 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:15:31.902853 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:15:31.902860 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:15:31.902868 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:15:31.902875 kernel: iommu: Default domain type: Translated
Jan 13 21:15:31.902883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:15:31.902891 kernel: efivars: Registered efivars operations
Jan 13 21:15:31.902901 kernel: vgaarb: loaded
Jan 13 21:15:31.902909 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:15:31.902917 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:15:31.902924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:15:31.902932 kernel: pnp: PnP ACPI init
Jan 13 21:15:31.903010 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 21:15:31.903021 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:15:31.903028 kernel: NET: Registered PF_INET protocol family
Jan 13 21:15:31.903038 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:15:31.903046 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:15:31.903053 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:15:31.903061 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:15:31.903068 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:15:31.903075 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:15:31.903083 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:15:31.903090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:15:31.903098 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:15:31.903106 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:15:31.903114 kernel: kvm [1]: HYP mode not available
Jan 13 21:15:31.903121 kernel: Initialise system trusted keyrings
Jan 13 21:15:31.903128 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:15:31.903136 kernel: Key type asymmetric registered
Jan 13 21:15:31.903143 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:15:31.903150 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:15:31.903158 kernel: io scheduler mq-deadline registered
Jan 13 21:15:31.903173 kernel: io scheduler kyber registered
Jan 13 21:15:31.903184 kernel: io scheduler bfq registered
Jan 13 21:15:31.903191 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:15:31.903199 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:15:31.903207 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:15:31.903282 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 21:15:31.903292 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:15:31.903300 kernel: thunder_xcv, ver 1.0
Jan 13 21:15:31.903307 kernel: thunder_bgx, ver 1.0
Jan 13 21:15:31.903315 kernel: nicpf, ver 1.0
Jan 13 21:15:31.903324 kernel: nicvf, ver 1.0
Jan 13 21:15:31.903400 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:15:31.903464 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:15:31 UTC (1736802931)
Jan 13 21:15:31.903474 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:15:31.903482 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 21:15:31.903490 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:15:31.903497 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:15:31.903505 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:15:31.903514 kernel: Segment Routing with IPv6
Jan 13 21:15:31.903522 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:15:31.903529 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:15:31.903536 kernel: Key type dns_resolver registered
Jan 13 21:15:31.903544 kernel: registered taskstats version 1
Jan 13 21:15:31.903551 kernel: Loading compiled-in X.509 certificates
Jan 13 21:15:31.903559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:15:31.903566 kernel: Key type .fscrypt registered
Jan 13 21:15:31.903573 kernel: Key type fscrypt-provisioning registered
Jan 13 21:15:31.903582 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:15:31.903590 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:15:31.903597 kernel: ima: No architecture policies found
Jan 13 21:15:31.903605 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:15:31.903612 kernel: clk: Disabling unused clocks
Jan 13 21:15:31.903619 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:15:31.903627 kernel: Run /init as init process
Jan 13 21:15:31.903634 kernel: with arguments:
Jan 13 21:15:31.903641 kernel: /init
Jan 13 21:15:31.903661 kernel: with environment:
Jan 13 21:15:31.903669 kernel: HOME=/
Jan 13 21:15:31.903676 kernel: TERM=linux
Jan 13 21:15:31.903684 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:15:31.903693 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:15:31.903703 systemd[1]: Detected virtualization kvm.
Jan 13 21:15:31.903711 systemd[1]: Detected architecture arm64.
Jan 13 21:15:31.903718 systemd[1]: Running in initrd.
Jan 13 21:15:31.903728 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:15:31.903736 systemd[1]: Hostname set to <localhost>.
Jan 13 21:15:31.903744 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:15:31.903752 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:15:31.903760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:15:31.903768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:15:31.903776 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:15:31.903784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:15:31.903794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:15:31.903802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:15:31.903811 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:15:31.903819 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:15:31.903827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:15:31.903836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:15:31.903845 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:15:31.903853 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:15:31.903861 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:15:31.903869 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:15:31.903877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:15:31.903885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:15:31.903894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:15:31.903902 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:15:31.903910 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:15:31.903919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:15:31.903927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:15:31.903935 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:15:31.903943 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:15:31.903951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:15:31.903959 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:15:31.903967 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:15:31.903975 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:15:31.903983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:15:31.903993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:15:31.904001 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:15:31.904009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:15:31.904017 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:15:31.904025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:15:31.904035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:15:31.904064 systemd-journald[237]: Collecting audit messages is disabled.
Jan 13 21:15:31.904085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:15:31.904095 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:15:31.904103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:15:31.904112 systemd-journald[237]: Journal started
Jan 13 21:15:31.904131 systemd-journald[237]: Runtime Journal (/run/log/journal/817603639dca46a9ad914e93d94296b9) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:15:31.893558 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 21:15:31.905700 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:15:31.910844 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:15:31.913122 kernel: Bridge firewalling registered
Jan 13 21:15:31.912512 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 21:15:31.915858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:15:31.916883 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:15:31.917907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:15:31.919885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:15:31.923981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:15:31.925444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:15:31.926421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:15:31.934973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:15:31.938565 dracut-cmdline[270]: dracut-dracut-053
Jan 13 21:15:31.949370 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:15:31.947878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:15:31.979041 systemd-resolved[284]: Positive Trust Anchors:
Jan 13 21:15:31.979059 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:15:31.979089 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:15:31.983763 systemd-resolved[284]: Defaulting to hostname 'linux'.
Jan 13 21:15:31.984762 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:15:31.986506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:15:32.010723 kernel: SCSI subsystem initialized
Jan 13 21:15:32.017668 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:15:32.022695 kernel: iscsi: registered transport (tcp)
Jan 13 21:15:32.035726 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:15:32.035746 kernel: QLogic iSCSI HBA Driver
Jan 13 21:15:32.079723 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:15:32.090851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:15:32.108641 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:15:32.108712 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:15:32.108725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:15:32.155680 kernel: raid6: neonx8 gen() 15771 MB/s
Jan 13 21:15:32.170670 kernel: raid6: neonx4 gen() 15643 MB/s
Jan 13 21:15:32.187668 kernel: raid6: neonx2 gen() 13198 MB/s
Jan 13 21:15:32.204673 kernel: raid6: neonx1 gen() 10467 MB/s
Jan 13 21:15:32.221670 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 13 21:15:32.238669 kernel: raid6: int64x4 gen() 7328 MB/s
Jan 13 21:15:32.255668 kernel: raid6: int64x2 gen() 6130 MB/s
Jan 13 21:15:32.272671 kernel: raid6: int64x1 gen() 5056 MB/s
Jan 13 21:15:32.272687 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Jan 13 21:15:32.289675 kernel: raid6: .... xor() 11929 MB/s, rmw enabled
Jan 13 21:15:32.289690 kernel: raid6: using neon recovery algorithm
Jan 13 21:15:32.294673 kernel: xor: measuring software checksum speed
Jan 13 21:15:32.294693 kernel: 8regs : 19299 MB/sec
Jan 13 21:15:32.296167 kernel: 32regs : 18186 MB/sec
Jan 13 21:15:32.296181 kernel: arm64_neon : 26263 MB/sec
Jan 13 21:15:32.296191 kernel: xor: using function: arm64_neon (26263 MB/sec)
Jan 13 21:15:32.345682 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:15:32.356633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:15:32.364849 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:15:32.375881 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 13 21:15:32.379820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:15:32.387920 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:15:32.400329 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 13 21:15:32.427962 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:15:32.443832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:15:32.483311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:15:32.491846 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:15:32.503925 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:15:32.505562 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:15:32.507586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:15:32.508457 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:15:32.518901 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:15:32.534705 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 21:15:32.547143 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:15:32.547261 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:15:32.547274 kernel: GPT:9289727 != 19775487
Jan 13 21:15:32.547290 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:15:32.547300 kernel: GPT:9289727 != 19775487
Jan 13 21:15:32.547311 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:15:32.547320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:15:32.534626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:15:32.543165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:15:32.543277 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:15:32.546733 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:15:32.547480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:15:32.547613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:15:32.549698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:15:32.565200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:15:32.570690 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (521)
Jan 13 21:15:32.572681 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518)
Jan 13 21:15:32.577672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:15:32.583683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:15:32.588668 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:15:32.595888 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:15:32.596810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:15:32.602877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:15:32.617877 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:15:32.619524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:15:32.624987 disk-uuid[549]: Primary Header is updated.
Jan 13 21:15:32.624987 disk-uuid[549]: Secondary Entries is updated.
Jan 13 21:15:32.624987 disk-uuid[549]: Secondary Header is updated.
Jan 13 21:15:32.629935 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:15:32.644282 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:15:33.642178 disk-uuid[551]: The operation has completed successfully.
Jan 13 21:15:33.643339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:15:33.666125 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:15:33.666254 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:15:33.689812 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:15:33.692612 sh[574]: Success
Jan 13 21:15:33.711718 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:15:33.740335 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:15:33.753050 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:15:33.755710 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:15:33.764787 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:15:33.764825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:15:33.764845 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:15:33.766151 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:15:33.766167 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:15:33.770105 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:15:33.771290 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:15:33.782815 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:15:33.784221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:15:33.792221 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:15:33.792271 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:15:33.792282 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:15:33.794707 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:15:33.802407 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:15:33.803671 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:15:33.810252 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:15:33.815847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:15:33.880909 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:15:33.893857 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:15:33.914182 systemd-networkd[763]: lo: Link UP
Jan 13 21:15:33.914191 systemd-networkd[763]: lo: Gained carrier
Jan 13 21:15:33.914883 systemd-networkd[763]: Enumeration completed
Jan 13 21:15:33.916219 ignition[665]: Ignition 2.19.0
Jan 13 21:15:33.914998 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:15:33.916225 ignition[665]: Stage: fetch-offline
Jan 13 21:15:33.915306 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:15:33.916257 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:15:33.915309 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:15:33.916271 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:15:33.916142 systemd-networkd[763]: eth0: Link UP
Jan 13 21:15:33.916443 ignition[665]: parsed url from cmdline: ""
Jan 13 21:15:33.916145 systemd-networkd[763]: eth0: Gained carrier
Jan 13 21:15:33.916447 ignition[665]: no config URL provided
Jan 13 21:15:33.916152 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:15:33.916451 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:15:33.916473 systemd[1]: Reached target network.target - Network.
Jan 13 21:15:33.916458 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:15:33.916481 ignition[665]: op(1): [started] loading QEMU firmware config module
Jan 13 21:15:33.916486 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:15:33.928605 ignition[665]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:15:33.936734 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:15:33.967361 ignition[665]: parsing config with SHA512: 35d33582ba4adff831cd916267d0b166c7bac6755556f783f44adc14f591c73988e941eef24e543591cb15324e89d4caad96ca7f58e715f4033bde797945ae9a
Jan 13 21:15:33.971283 unknown[665]: fetched base config from "system"
Jan 13 21:15:33.971292 unknown[665]: fetched user config from "qemu"
Jan 13 21:15:33.971712 ignition[665]: fetch-offline: fetch-offline passed
Jan 13 21:15:33.973512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:15:33.971773 ignition[665]: Ignition finished successfully
Jan 13 21:15:33.974873 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:15:33.983825 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:15:33.993969 ignition[771]: Ignition 2.19.0
Jan 13 21:15:33.993979 ignition[771]: Stage: kargs
Jan 13 21:15:33.994144 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:15:33.994154 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:15:33.995024 ignition[771]: kargs: kargs passed
Jan 13 21:15:33.995069 ignition[771]: Ignition finished successfully
Jan 13 21:15:33.997338 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:15:34.006817 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:15:34.016394 ignition[779]: Ignition 2.19.0
Jan 13 21:15:34.016405 ignition[779]: Stage: disks
Jan 13 21:15:34.016556 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:15:34.016565 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:15:34.017509 ignition[779]: disks: disks passed
Jan 13 21:15:34.017554 ignition[779]: Ignition finished successfully
Jan 13 21:15:34.019451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:15:34.020701 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:15:34.021880 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:15:34.023357 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:15:34.024786 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:15:34.026022 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:15:34.037807 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:15:34.047357 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:15:34.051549 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:15:34.054722 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:15:34.100678 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:15:34.100737 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:15:34.101768 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:15:34.114744 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:15:34.116695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:15:34.117526 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:15:34.117567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:15:34.117590 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:15:34.123304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:15:34.124852 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Jan 13 21:15:34.125081 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:15:34.128407 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:15:34.128425 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:15:34.128435 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:15:34.130666 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:15:34.131606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:15:34.167753 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:15:34.171538 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:15:34.175324 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:15:34.178849 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:15:34.248343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:15:34.260762 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:15:34.262141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:15:34.266667 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:15:34.281927 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:15:34.284182 ignition[910]: INFO : Ignition 2.19.0
Jan 13 21:15:34.284182 ignition[910]: INFO : Stage: mount
Jan 13 21:15:34.285340 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:15:34.285340 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:15:34.285340 ignition[910]: INFO : mount: mount passed
Jan 13 21:15:34.285340 ignition[910]: INFO : Ignition finished successfully
Jan 13 21:15:34.286474 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:15:34.292800 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:15:34.764208 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:15:34.774876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:15:34.781101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Jan 13 21:15:34.781139 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:15:34.781151 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:15:34.782669 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:15:34.784683 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:15:34.785327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:15:34.801289 ignition[941]: INFO : Ignition 2.19.0
Jan 13 21:15:34.801289 ignition[941]: INFO : Stage: files
Jan 13 21:15:34.802559 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:15:34.802559 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:15:34.802559 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:15:34.805210 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:15:34.805210 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:15:34.805210 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:15:34.808264 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:15:34.808264 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:15:34.808264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:15:34.808264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:15:34.805625 unknown[941]: wrote ssh authorized keys file for user: core
Jan 13 21:15:34.856911 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:15:35.038606 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:15:35.038606 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:15:35.041516 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 21:15:35.314842 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 13 21:15:35.378849 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:15:35.510584 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:15:35.510584 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:15:35.513264 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 21:15:35.770452 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:15:35.984419 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:15:35.984419 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:15:35.987725 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 21:15:35.989119 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:15:36.011067 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:15:36.014568 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:15:36.016780 ignition[941]: INFO : files: files passed
Jan 13 21:15:36.016780 ignition[941]: INFO : Ignition finished successfully
Jan 13 21:15:36.017148 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:15:36.030837 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:15:36.033489 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:15:36.034589 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:15:36.034710 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:15:36.040952 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:15:36.044094 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:15:36.044094 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:15:36.046444 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:15:36.046956 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:15:36.048931 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:15:36.061825 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:15:36.082257 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:15:36.082370 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:15:36.083975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:15:36.085266 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:15:36.086578 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:15:36.095799 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:15:36.107478 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:15:36.109798 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:15:36.121053 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:15:36.121980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:15:36.123445 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:15:36.124703 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:15:36.124886 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:15:36.126672 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:15:36.128135 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:15:36.129331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:15:36.130531 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:15:36.131920 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:15:36.133468 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:15:36.134757 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:15:36.136141 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:15:36.137493 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:15:36.138723 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:15:36.139929 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:15:36.140048 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:15:36.141729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:15:36.143140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:15:36.144488 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:15:36.145833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:15:36.146734 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:15:36.146847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:15:36.148941 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:15:36.149064 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:15:36.150554 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:15:36.151695 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:15:36.156731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:15:36.158581 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:15:36.159339 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:15:36.160478 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:15:36.160570 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:15:36.161661 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:15:36.161751 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:15:36.162831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:15:36.162944 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:15:36.164181 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:15:36.164282 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:15:36.180836 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:15:36.182213 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:15:36.182857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:15:36.182974 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:15:36.184277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:15:36.184378 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:15:36.189407 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:15:36.189859 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 21:15:36.192949 ignition[995]: INFO : Ignition 2.19.0 Jan 13 21:15:36.192949 ignition[995]: INFO : Stage: umount Jan 13 21:15:36.194317 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:15:36.194317 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:15:36.194317 ignition[995]: INFO : umount: umount passed Jan 13 21:15:36.194317 ignition[995]: INFO : Ignition finished successfully Jan 13 21:15:36.196818 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:15:36.196923 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:15:36.198985 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:15:36.199398 systemd[1]: Stopped target network.target - Network. Jan 13 21:15:36.200194 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:15:36.200247 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:15:36.201455 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:15:36.201495 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:15:36.202597 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:15:36.202634 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:15:36.204169 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:15:36.204216 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:15:36.205580 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:15:36.206809 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:15:36.212961 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:15:36.213097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:15:36.213730 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 13 21:15:36.215261 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:15:36.215320 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:15:36.216943 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:15:36.217041 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:15:36.219344 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:15:36.219401 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:15:36.232796 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:15:36.233464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:15:36.233528 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:15:36.235018 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:15:36.235063 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:15:36.236305 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:15:36.236345 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:15:36.238057 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:15:36.247390 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:15:36.247521 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:15:36.254058 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 21:15:36.254223 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:15:36.255753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:15:36.255794 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:15:36.258668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:15:36.258711 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:15:36.260292 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:15:36.260340 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:15:36.262066 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:15:36.262120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:15:36.263926 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:15:36.263967 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:15:36.276807 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:15:36.277571 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:15:36.277625 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:15:36.279199 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:15:36.279239 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:15:36.280747 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:15:36.280800 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:15:36.282382 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:15:36.282422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:15:36.284143 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:15:36.285689 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:15:36.287070 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:15:36.287156 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:15:36.288904 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:15:36.289688 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:15:36.289752 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:15:36.301794 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:15:36.307619 systemd[1]: Switching root. Jan 13 21:15:36.336930 systemd-journald[237]: Journal stopped Jan 13 21:15:37.043488 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
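The Ignition run that finished above (file writes in ops 9 through 13, the unit presets, and the final umount stage) is driven entirely by a declarative JSON config rather than imperative scripting. As a rough sketch of the shape of such a config — the file path, download URL, and unit names are taken from the log entries above, the field names follow the Ignition v3 spec, and everything else is an illustrative assumption — a minimal generator could look like:

    import json

    # Minimal sketch of an Ignition v3 config that would produce ops like the
    # ones logged above: fetch a sysext image, symlink it into /etc/extensions,
    # and set unit presets. Values not visible in the log are assumptions.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{
                "path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
            }],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True},
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }
    print(json.dumps(config, indent=2))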
Jan 13 21:15:37.043543 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:15:37.043555 kernel: SELinux: policy capability open_perms=1 Jan 13 21:15:37.043565 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:15:37.043575 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:15:37.043587 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:15:37.043603 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:15:37.043612 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:15:37.043622 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:15:37.043631 kernel: audit: type=1403 audit(1736802936.492:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:15:37.043641 systemd[1]: Successfully loaded SELinux policy in 32.407ms. Jan 13 21:15:37.043681 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.475ms. Jan 13 21:15:37.043694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:15:37.043706 systemd[1]: Detected virtualization kvm. Jan 13 21:15:37.043725 systemd[1]: Detected architecture arm64. Jan 13 21:15:37.043736 systemd[1]: Detected first boot. Jan 13 21:15:37.043746 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:15:37.043757 zram_generator::config[1039]: No configuration found. Jan 13 21:15:37.043769 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:15:37.043779 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:15:37.043789 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:15:37.043800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:15:37.043813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:15:37.043823 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:15:37.043834 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:15:37.043844 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:15:37.043856 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:15:37.043867 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:15:37.043878 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:15:37.043888 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:15:37.043900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:15:37.043911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:15:37.043921 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:15:37.043932 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:15:37.043942 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
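The systemd 255 banner above encodes the build's compile-time options as a +/- feature string. A small sketch that splits it into enabled and disabled features — plain string handling, nothing assumed beyond the log line itself (the trailing "default-hierarchy=unified" token is omitted as it is a key/value setting, not a feature flag):

    # The feature string is copied verbatim from the systemd banner above.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")
    enabled = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]
    print(f"{len(enabled)} enabled: {', '.join(enabled)}")
    print(f"{len(disabled)} disabled: {', '.join(disabled)}")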
Jan 13 21:15:37.043953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:15:37.043964 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:15:37.043974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:15:37.043989 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:15:37.043999 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:15:37.044011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:15:37.044022 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:15:37.044032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:15:37.044043 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:15:37.044053 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:15:37.044070 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:15:37.044081 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:15:37.044094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:15:37.044106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:15:37.044116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:15:37.044127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:15:37.044138 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:15:37.044149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:15:37.044159 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:15:37.044169 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:15:37.044180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:15:37.044192 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:15:37.044202 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:15:37.044213 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:15:37.044224 systemd[1]: Reached target machines.target - Containers. Jan 13 21:15:37.044234 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:15:37.044244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:15:37.044255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:15:37.044265 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:15:37.044276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:15:37.044288 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:15:37.044298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:15:37.044309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:15:37.044319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 21:15:37.044331 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:15:37.044341 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:15:37.044352 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:15:37.044362 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:15:37.044374 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:15:37.044384 kernel: fuse: init (API version 7.39) Jan 13 21:15:37.044394 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:15:37.044404 kernel: loop: module loaded Jan 13 21:15:37.044414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:15:37.044426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:15:37.044436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:15:37.044446 kernel: ACPI: bus type drm_connector registered Jan 13 21:15:37.044473 systemd-journald[1103]: Collecting audit messages is disabled. Jan 13 21:15:37.044501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:15:37.044515 systemd-journald[1103]: Journal started Jan 13 21:15:37.044538 systemd-journald[1103]: Runtime Journal (/run/log/journal/817603639dca46a9ad914e93d94296b9) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:15:36.880230 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:15:36.894105 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:15:36.894457 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:15:37.046784 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:15:37.046821 systemd[1]: Stopped verity-setup.service. Jan 13 21:15:37.050351 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:15:37.051140 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:15:37.052049 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:15:37.053112 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:15:37.053981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:15:37.054873 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:15:37.055760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:15:37.056701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:15:37.057957 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:15:37.058115 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:15:37.059350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:15:37.059481 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:15:37.060905 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:15:37.061047 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:15:37.063232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:15:37.063401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:15:37.064558 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
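The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop entries above are instances of the modprobe@.service template unit, which systemd uses to pull kernel modules in as ordinary unit dependencies during early boot. A sketch of the equivalent manual step (requires root; the template unit simply wraps modprobe for the instance name):

    import subprocess

    # Instancing the template loads the named module, matching the
    # "Finished modprobe@fuse.service" entry above.
    subprocess.run(["systemctl", "start", "modprobe@fuse.service"], check=True)

    # Verify the module registered, as the kernel's "fuse: init" line confirms.
    lsmod = subprocess.run(["lsmod"], capture_output=True, text=True, check=True)
    print([line for line in lsmod.stdout.splitlines() if line.startswith("fuse")])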
Jan 13 21:15:37.065581 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:15:37.066877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:15:37.068047 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:15:37.068279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:15:37.069397 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:15:37.070560 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:15:37.071981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:15:37.083700 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:15:37.090759 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:15:37.093020 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:15:37.093854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:15:37.093898 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:15:37.095588 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:15:37.099743 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:15:37.102179 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:15:37.103067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:15:37.104804 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:15:37.106865 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:15:37.110774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:15:37.112156 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:15:37.113232 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:15:37.114936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:15:37.119830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:15:37.120918 systemd-journald[1103]: Time spent on flushing to /var/log/journal/817603639dca46a9ad914e93d94296b9 is 14.927ms for 859 entries. Jan 13 21:15:37.120918 systemd-journald[1103]: System Journal (/var/log/journal/817603639dca46a9ad914e93d94296b9) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:15:37.148373 systemd-journald[1103]: Received client request to flush runtime journal. Jan 13 21:15:37.148436 kernel: loop0: detected capacity change from 0 to 114432 Jan 13 21:15:37.122253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:15:37.125070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:15:37.126115 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:15:37.127126 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 13 21:15:37.131815 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:15:37.138426 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:15:37.140751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:15:37.142635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:15:37.149964 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:15:37.152173 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:15:37.156440 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:15:37.160937 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:15:37.173988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:15:37.176967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:15:37.177638 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:15:37.180426 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 13 21:15:37.180447 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Jan 13 21:15:37.186269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:15:37.193867 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:15:37.203673 kernel: loop1: detected capacity change from 0 to 114328 Jan 13 21:15:37.221490 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:15:37.228843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:15:37.238503 kernel: loop2: detected capacity change from 0 to 194512 Jan 13 21:15:37.244683 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 13 21:15:37.244701 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 13 21:15:37.249748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:15:37.273684 kernel: loop3: detected capacity change from 0 to 114432 Jan 13 21:15:37.278674 kernel: loop4: detected capacity change from 0 to 114328 Jan 13 21:15:37.282715 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 21:15:37.287995 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:15:37.288485 (sd-merge)[1178]: Merged extensions into '/usr'. Jan 13 21:15:37.297949 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:15:37.297964 systemd[1]: Reloading... Jan 13 21:15:37.350691 zram_generator::config[1204]: No configuration found. Jan 13 21:15:37.412901 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:15:37.444320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:15:37.480192 systemd[1]: Reloading finished in 181 ms. Jan 13 21:15:37.511931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
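The (sd-merge) entries above show systemd-sysext overlaying the three extension images onto /usr; the loop0 through loop5 capacity changes are those images being attached as loop devices before the merge. A sketch for inspecting the merge after boot — systemd-sysext is the standard CLI for this, though its output format varies by systemd version:

    import subprocess

    # Lists hierarchies with merged extensions; on this host it should name the
    # same images sd-merge reported above (containerd-flatcar, docker-flatcar,
    # kubernetes).
    status = subprocess.run(["systemd-sysext", "status"],
                            capture_output=True, text=True, check=True)
    print(status.stdout)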
Jan 13 21:15:37.513248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:15:37.528817 systemd[1]: Starting ensure-sysext.service... Jan 13 21:15:37.530815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:15:37.541764 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:15:37.541891 systemd[1]: Reloading... Jan 13 21:15:37.554399 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:15:37.554683 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:15:37.555315 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:15:37.555540 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 21:15:37.555591 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 13 21:15:37.557767 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:15:37.557779 systemd-tmpfiles[1240]: Skipping /boot Jan 13 21:15:37.564442 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:15:37.564460 systemd-tmpfiles[1240]: Skipping /boot Jan 13 21:15:37.596687 zram_generator::config[1270]: No configuration found. Jan 13 21:15:37.677722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:15:37.713360 systemd[1]: Reloading finished in 171 ms. Jan 13 21:15:37.728831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:15:37.742079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:15:37.749190 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:15:37.751646 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:15:37.753600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:15:37.756949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:15:37.760974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:15:37.766354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:15:37.770422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:15:37.772300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:15:37.775270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:15:37.778534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:15:37.779783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:15:37.787981 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:15:37.789516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:15:37.789695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 13 21:15:37.792129 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:15:37.797692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:15:37.797857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:15:37.799329 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:15:37.799468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:15:37.802332 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jan 13 21:15:37.803971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:15:37.804197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:15:37.812936 augenrules[1333]: No rules Jan 13 21:15:37.814013 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:15:37.815315 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:15:37.816680 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:15:37.824315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:15:37.827800 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:15:37.831225 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:15:37.842155 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:15:37.845510 systemd[1]: Finished ensure-sysext.service. Jan 13 21:15:37.849793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:15:37.858979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:15:37.866038 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:15:37.869357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:15:37.871568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:15:37.872778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:15:37.874851 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:15:37.878818 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:15:37.880807 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:15:37.881307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:15:37.881444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:15:37.882601 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:15:37.882748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:15:37.883783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:15:37.883904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:15:37.885579 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 13 21:15:37.885751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:15:37.893096 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 21:15:37.893875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:15:37.893940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:15:37.899814 systemd-resolved[1307]: Positive Trust Anchors: Jan 13 21:15:37.901767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363) Jan 13 21:15:37.902017 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:15:37.902064 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:15:37.913435 systemd-resolved[1307]: Defaulting to hostname 'linux'. Jan 13 21:15:37.918542 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:15:37.921738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:15:37.974907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:15:37.977089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:15:37.977992 systemd-networkd[1373]: lo: Link UP Jan 13 21:15:37.978004 systemd-networkd[1373]: lo: Gained carrier Jan 13 21:15:37.978742 systemd-networkd[1373]: Enumeration completed Jan 13 21:15:37.979544 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:15:37.979551 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:15:37.980417 systemd-networkd[1373]: eth0: Link UP Jan 13 21:15:37.980492 systemd-networkd[1373]: eth0: Gained carrier Jan 13 21:15:37.980543 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:15:37.980640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:15:37.982227 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:15:37.983195 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:15:37.986674 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:15:37.988227 systemd[1]: Reached target network.target - Network. Jan 13 21:15:37.989056 systemd[1]: Reached target time-set.target - System Time Set. 
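systemd-resolved's startup above installs the DNSSEC root trust anchor (the ". IN DS 20326 8 2 ..." record) plus negative trust anchors for private and special-use zones, then falls back to the hostname 'linux'. A sketch for querying the resulting resolver state at runtime (resolvectl is the standard client):

    import subprocess

    # Global and per-link resolver state, including the DNSSEC setting and the
    # DNS servers learned over DHCP on eth0.
    print(subprocess.run(["resolvectl", "status"],
                         capture_output=True, text=True, check=True).stdout)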
Jan 13 21:15:37.993757 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:15:37.994357 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Jan 13 21:15:37.510993 systemd-resolved[1307]: Clock change detected. Flushing caches. Jan 13 21:15:37.518786 systemd-journald[1103]: Time jumped backwards, rotating. Jan 13 21:15:37.511084 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:15:37.511144 systemd-timesyncd[1374]: Initial clock synchronization to Mon 2025-01-13 21:15:37.510956 UTC. Jan 13 21:15:37.512368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:15:37.515215 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:15:37.516576 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:15:37.536429 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:15:37.548005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:15:37.565594 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:15:37.566740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:15:37.567635 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:15:37.568476 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:15:37.569365 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:15:37.570391 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:15:37.571264 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:15:37.572147 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:15:37.572990 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:15:37.573024 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:15:37.573684 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:15:37.575240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:15:37.577452 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:15:37.586025 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:15:37.588075 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:15:37.589392 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:15:37.590269 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:15:37.590941 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:15:37.591691 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:15:37.591720 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:15:37.592626 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:15:37.594451 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
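The timestamps above jump backwards at this point because systemd-timesyncd's first synchronization against 10.0.0.1:123 stepped the system clock, which is also why resolved flushes its caches and journald rotates. A sketch for checking the sync state afterwards (timedatectl is the standard client; the subcommand is available in recent systemd releases):

    import subprocess

    # Shows the selected NTP server, poll interval, and last offset -- the peer
    # here would be the 10.0.0.1 server logged above.
    print(subprocess.run(["timedatectl", "timesync-status"],
                         capture_output=True, text=True, check=True).stdout)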
Jan 13 21:15:37.595636 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:15:37.597736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:15:37.600383 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:15:37.601091 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:15:37.603333 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:15:37.604881 jq[1406]: false Jan 13 21:15:37.607261 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:15:37.612318 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:15:37.616263 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:15:37.620161 extend-filesystems[1407]: Found loop3 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found loop4 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found loop5 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found vda Jan 13 21:15:37.620161 extend-filesystems[1407]: Found vda1 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found vda2 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found vda3 Jan 13 21:15:37.620161 extend-filesystems[1407]: Found usr Jan 13 21:15:37.628190 extend-filesystems[1407]: Found vda4 Jan 13 21:15:37.628190 extend-filesystems[1407]: Found vda6 Jan 13 21:15:37.628190 extend-filesystems[1407]: Found vda7 Jan 13 21:15:37.628190 extend-filesystems[1407]: Found vda9 Jan 13 21:15:37.628190 extend-filesystems[1407]: Checking size of /dev/vda9 Jan 13 21:15:37.622626 dbus-daemon[1405]: [system] SELinux support is enabled Jan 13 21:15:37.622315 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:15:37.627154 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:15:37.627637 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:15:37.630210 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:15:37.635469 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:15:37.637008 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:15:37.640319 jq[1424]: true Jan 13 21:15:37.642239 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:15:37.644152 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:15:37.644322 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:15:37.644600 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:15:37.646298 extend-filesystems[1407]: Resized partition /dev/vda9 Jan 13 21:15:37.644735 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:15:37.647593 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:15:37.648959 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 21:15:37.655135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1341) Jan 13 21:15:37.668071 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:15:37.676705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:15:37.677075 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:15:37.678622 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:15:37.679843 tar[1429]: linux-arm64/helm Jan 13 21:15:37.678644 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:15:37.690128 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:15:37.689244 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:15:37.694718 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:15:37.696165 systemd-logind[1415]: New seat seat0. Jan 13 21:15:37.698410 jq[1431]: true Jan 13 21:15:37.698035 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:15:37.700240 update_engine[1423]: I20250113 21:15:37.698841 1423 main.cc:92] Flatcar Update Engine starting Jan 13 21:15:37.704887 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:15:37.705059 update_engine[1423]: I20250113 21:15:37.704937 1423 update_check_scheduler.cc:74] Next update check in 9m16s Jan 13 21:15:37.708374 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:15:37.721418 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:15:37.739218 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:15:37.739218 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:15:37.739218 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:15:37.744201 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Jan 13 21:15:37.744430 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:15:37.744627 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:15:37.760843 bash[1459]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:15:37.762209 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:15:37.764625 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:15:37.766879 locksmithd[1445]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:15:37.887682 containerd[1436]: time="2025-01-13T21:15:37.887567904Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:15:37.916417 containerd[1436]: time="2025-01-13T21:15:37.916180264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.917677 containerd[1436]: time="2025-01-13T21:15:37.917631344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.917798104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.917824544Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.917986704Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.918005224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.918065584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918158 containerd[1436]: time="2025-01-13T21:15:37.918079944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918550 containerd[1436]: time="2025-01-13T21:15:37.918524704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918611 containerd[1436]: time="2025-01-13T21:15:37.918599264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918722 containerd[1436]: time="2025-01-13T21:15:37.918704824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918771 containerd[1436]: time="2025-01-13T21:15:37.918759184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.918962 containerd[1436]: time="2025-01-13T21:15:37.918942584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.919308 containerd[1436]: time="2025-01-13T21:15:37.919284144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:15:37.919940 containerd[1436]: time="2025-01-13T21:15:37.919529224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:15:37.919940 containerd[1436]: time="2025-01-13T21:15:37.919549464Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:15:37.919940 containerd[1436]: time="2025-01-13T21:15:37.919642464Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1
Jan 13 21:15:37.919940 containerd[1436]: time="2025-01-13T21:15:37.919862664Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:15:37.947309 containerd[1436]: time="2025-01-13T21:15:37.947257504Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:15:37.947408 containerd[1436]: time="2025-01-13T21:15:37.947362744Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:15:37.947453 containerd[1436]: time="2025-01-13T21:15:37.947441104Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:15:37.947475 containerd[1436]: time="2025-01-13T21:15:37.947459744Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:15:37.947502 containerd[1436]: time="2025-01-13T21:15:37.947475744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:15:37.947723 containerd[1436]: time="2025-01-13T21:15:37.947697864Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:15:37.948079 containerd[1436]: time="2025-01-13T21:15:37.948059624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:15:37.948227 containerd[1436]: time="2025-01-13T21:15:37.948207544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:15:37.948257 containerd[1436]: time="2025-01-13T21:15:37.948229504Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:15:37.948257 containerd[1436]: time="2025-01-13T21:15:37.948243704Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:15:37.948304 containerd[1436]: time="2025-01-13T21:15:37.948257824Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948304 containerd[1436]: time="2025-01-13T21:15:37.948283864Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948304 containerd[1436]: time="2025-01-13T21:15:37.948298824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948354 containerd[1436]: time="2025-01-13T21:15:37.948317624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948354 containerd[1436]: time="2025-01-13T21:15:37.948332384Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948386 containerd[1436]: time="2025-01-13T21:15:37.948345864Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948386 containerd[1436]: time="2025-01-13T21:15:37.948365584Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948386 containerd[1436]: time="2025-01-13T21:15:37.948378544Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:15:37.948446 containerd[1436]: time="2025-01-13T21:15:37.948397144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948446 containerd[1436]: time="2025-01-13T21:15:37.948411624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948446 containerd[1436]: time="2025-01-13T21:15:37.948424744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948510 containerd[1436]: time="2025-01-13T21:15:37.948444944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948510 containerd[1436]: time="2025-01-13T21:15:37.948461784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948510 containerd[1436]: time="2025-01-13T21:15:37.948475064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948510 containerd[1436]: time="2025-01-13T21:15:37.948496224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948572 containerd[1436]: time="2025-01-13T21:15:37.948519544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948572 containerd[1436]: time="2025-01-13T21:15:37.948535504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948572 containerd[1436]: time="2025-01-13T21:15:37.948554304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948572 containerd[1436]: time="2025-01-13T21:15:37.948566504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948583784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948600944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948620024Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948640824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948658944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948672 containerd[1436]: time="2025-01-13T21:15:37.948670224Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948790424Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948812784Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948829744Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948843104Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948852384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948864824Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948874344Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:15:37.948900 containerd[1436]: time="2025-01-13T21:15:37.948885464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:15:37.949373 containerd[1436]: time="2025-01-13T21:15:37.949304264Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:15:37.949475 containerd[1436]: time="2025-01-13T21:15:37.949380504Z" level=info msg="Connect containerd service"
Jan 13 21:15:37.949475 containerd[1436]: time="2025-01-13T21:15:37.949406544Z" level=info msg="using legacy CRI server"
Jan 13 21:15:37.949475 containerd[1436]: time="2025-01-13T21:15:37.949422664Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:15:37.949564 containerd[1436]: time="2025-01-13T21:15:37.949543704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:15:37.950342 containerd[1436]: time="2025-01-13T21:15:37.950315864Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950631064Z" level=info msg="Start subscribing containerd event"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950684344Z" level=info msg="Start recovering state"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950744224Z" level=info msg="Start event monitor"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950754464Z" level=info msg="Start snapshots syncer"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950764064Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:15:37.951008 containerd[1436]: time="2025-01-13T21:15:37.950772424Z" level=info msg="Start streaming server"
Jan 13 21:15:37.951165 containerd[1436]: time="2025-01-13T21:15:37.951005624Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:15:37.951165 containerd[1436]: time="2025-01-13T21:15:37.951059424Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:15:37.951269 containerd[1436]: time="2025-01-13T21:15:37.951200784Z" level=info msg="containerd successfully booted in 0.064518s"
Jan 13 21:15:37.951277 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:15:38.046668 tar[1429]: linux-arm64/LICENSE
Jan 13 21:15:38.046838 tar[1429]: linux-arm64/README.md
Jan 13 21:15:38.061175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:15:38.734249 systemd-networkd[1373]: eth0: Gained IPv6LL
Jan 13 21:15:38.736732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:15:38.738216 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:15:38.750438 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:15:38.752638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:15:38.754586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:15:38.772753 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:15:38.774592 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:15:38.776148 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:15:38.779680 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:15:39.255757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:15:39.259902 (kubelet)[1502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:15:39.416362 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:15:39.435616 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:15:39.444366 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:15:39.450423 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:15:39.452062 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:15:39.456033 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:15:39.468368 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:15:39.471233 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:15:39.473335 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:15:39.474574 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:15:39.475502 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:15:39.476356 systemd[1]: Startup finished in 545ms (kernel) + 4.791s (initrd) + 3.500s (userspace) = 8.837s. Jan 13 21:15:39.757144 kubelet[1502]: E0113 21:15:39.757042 1502 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:15:39.760026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:15:39.760208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:15:44.285210 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:15:44.286451 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:43510.service - OpenSSH per-connection server daemon (10.0.0.1:43510). Jan 13 21:15:44.350539 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 43510 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:44.352533 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:44.361947 systemd-logind[1415]: New session 1 of user core. Jan 13 21:15:44.362937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:15:44.372401 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:15:44.382621 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:15:44.386237 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:15:44.392853 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:15:44.463172 systemd[1537]: Queued start job for default target default.target. Jan 13 21:15:44.475084 systemd[1537]: Created slice app.slice - User Application Slice. Jan 13 21:15:44.475150 systemd[1537]: Reached target paths.target - Paths. Jan 13 21:15:44.475163 systemd[1537]: Reached target timers.target - Timers. 
Jan 13 21:15:44.476488 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:15:44.486817 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:15:44.486887 systemd[1537]: Reached target sockets.target - Sockets. Jan 13 21:15:44.486900 systemd[1537]: Reached target basic.target - Basic System. Jan 13 21:15:44.486937 systemd[1537]: Reached target default.target - Main User Target. Jan 13 21:15:44.486965 systemd[1537]: Startup finished in 88ms. Jan 13 21:15:44.487290 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:15:44.497310 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:15:44.557674 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:43520.service - OpenSSH per-connection server daemon (10.0.0.1:43520). Jan 13 21:15:44.593212 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 43520 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:44.594652 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:44.599792 systemd-logind[1415]: New session 2 of user core. Jan 13 21:15:44.608312 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:15:44.663031 sshd[1548]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:44.676214 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:43520.service: Deactivated successfully. Jan 13 21:15:44.678085 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:15:44.681668 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:15:44.688494 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:43524.service - OpenSSH per-connection server daemon (10.0.0.1:43524). Jan 13 21:15:44.689386 systemd-logind[1415]: Removed session 2. Jan 13 21:15:44.720809 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 43524 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:44.722197 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:44.727167 systemd-logind[1415]: New session 3 of user core. Jan 13 21:15:44.743292 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:15:44.793888 sshd[1555]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:44.806769 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:43524.service: Deactivated successfully. Jan 13 21:15:44.808244 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:15:44.810335 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:15:44.819576 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:43540.service - OpenSSH per-connection server daemon (10.0.0.1:43540). Jan 13 21:15:44.820768 systemd-logind[1415]: Removed session 3. Jan 13 21:15:44.850511 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 43540 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:44.851941 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:44.856193 systemd-logind[1415]: New session 4 of user core. Jan 13 21:15:44.867302 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:15:44.920969 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:44.933590 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:43540.service: Deactivated successfully. Jan 13 21:15:44.936638 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 13 21:15:44.938088 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:15:44.939560 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:43548.service - OpenSSH per-connection server daemon (10.0.0.1:43548). Jan 13 21:15:44.940294 systemd-logind[1415]: Removed session 4. Jan 13 21:15:44.978537 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 43548 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:44.980419 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:44.985004 systemd-logind[1415]: New session 5 of user core. Jan 13 21:15:44.998366 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:15:45.065337 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:15:45.065629 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:15:45.089157 sudo[1572]: pam_unix(sudo:session): session closed for user root Jan 13 21:15:45.092487 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:45.102918 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:43548.service: Deactivated successfully. Jan 13 21:15:45.105881 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:15:45.107868 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:15:45.124725 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Jan 13 21:15:45.125704 systemd-logind[1415]: Removed session 5. Jan 13 21:15:45.156911 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:45.158504 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:45.162323 systemd-logind[1415]: New session 6 of user core. Jan 13 21:15:45.179371 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:15:45.233248 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:15:45.233553 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:15:45.236908 sudo[1581]: pam_unix(sudo:session): session closed for user root Jan 13 21:15:45.242804 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:15:45.243457 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:15:45.267648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:15:45.268982 auditctl[1584]: No rules Jan 13 21:15:45.269328 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:15:45.269523 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:15:45.272091 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:15:45.301353 augenrules[1602]: No rules Jan 13 21:15:45.302794 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:15:45.303881 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 13 21:15:45.306296 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:45.316532 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:43556.service: Deactivated successfully. Jan 13 21:15:45.318826 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 13 21:15:45.320204 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:15:45.332460 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:43558.service - OpenSSH per-connection server daemon (10.0.0.1:43558). Jan 13 21:15:45.333378 systemd-logind[1415]: Removed session 6. Jan 13 21:15:45.366248 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 43558 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:15:45.367686 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:45.372133 systemd-logind[1415]: New session 7 of user core. Jan 13 21:15:45.381357 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:15:45.435675 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:15:45.435943 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:15:45.750450 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:15:45.750463 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:15:46.009121 dockerd[1632]: time="2025-01-13T21:15:46.008352424Z" level=info msg="Starting up" Jan 13 21:15:46.142514 dockerd[1632]: time="2025-01-13T21:15:46.142470464Z" level=info msg="Loading containers: start." Jan 13 21:15:46.230137 kernel: Initializing XFRM netlink socket Jan 13 21:15:46.293625 systemd-networkd[1373]: docker0: Link UP Jan 13 21:15:46.308394 dockerd[1632]: time="2025-01-13T21:15:46.308288024Z" level=info msg="Loading containers: done." Jan 13 21:15:46.321019 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2315225265-merged.mount: Deactivated successfully. Jan 13 21:15:46.322017 dockerd[1632]: time="2025-01-13T21:15:46.321887624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:15:46.322017 dockerd[1632]: time="2025-01-13T21:15:46.322002904Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:15:46.322150 dockerd[1632]: time="2025-01-13T21:15:46.322139184Z" level=info msg="Daemon has completed initialization" Jan 13 21:15:46.389306 dockerd[1632]: time="2025-01-13T21:15:46.389152184Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:15:46.389647 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:15:47.099381 containerd[1436]: time="2025-01-13T21:15:47.099332304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:15:47.887634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087849233.mount: Deactivated successfully. 
Jan 13 21:15:49.647219 containerd[1436]: time="2025-01-13T21:15:49.647165704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:49.651986 containerd[1436]: time="2025-01-13T21:15:49.651940384Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 21:15:49.653250 containerd[1436]: time="2025-01-13T21:15:49.653189384Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:49.658130 containerd[1436]: time="2025-01-13T21:15:49.657352024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:49.658851 containerd[1436]: time="2025-01-13T21:15:49.658362384Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.5589846s" Jan 13 21:15:49.658851 containerd[1436]: time="2025-01-13T21:15:49.658401224Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 21:15:49.677747 containerd[1436]: time="2025-01-13T21:15:49.677695144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:15:49.872342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:15:49.884310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:15:49.980621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:15:49.985650 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:15:50.027536 kubelet[1857]: E0113 21:15:50.027440 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:15:50.030841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:15:50.030994 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:15:52.842575 containerd[1436]: time="2025-01-13T21:15:52.841898864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:52.842867 containerd[1436]: time="2025-01-13T21:15:52.842574104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299"
Jan 13 21:15:52.843140 containerd[1436]: time="2025-01-13T21:15:52.843068544Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:52.846161 containerd[1436]: time="2025-01-13T21:15:52.846129984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:52.847278 containerd[1436]: time="2025-01-13T21:15:52.847244984Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 3.16951148s"
Jan 13 21:15:52.847278 containerd[1436]: time="2025-01-13T21:15:52.847281504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Jan 13 21:15:52.866011 containerd[1436]: time="2025-01-13T21:15:52.865945904Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 21:15:54.321603 containerd[1436]: time="2025-01-13T21:15:54.321546424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:54.322071 containerd[1436]: time="2025-01-13T21:15:54.322026064Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642"
Jan 13 21:15:54.323009 containerd[1436]: time="2025-01-13T21:15:54.322969504Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:54.326139 containerd[1436]: time="2025-01-13T21:15:54.326089664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:54.327803 containerd[1436]: time="2025-01-13T21:15:54.327761264Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.46174996s"
Jan 13 21:15:54.327842 containerd[1436]: time="2025-01-13T21:15:54.327803584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Jan 13 21:15:54.346751 containerd[1436]: time="2025-01-13T21:15:54.346712024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 21:15:55.363441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848890488.mount: Deactivated successfully.
Jan 13 21:15:55.790285 containerd[1436]: time="2025-01-13T21:15:55.790225384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:55.791356 containerd[1436]: time="2025-01-13T21:15:55.791332144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979"
Jan 13 21:15:55.792194 containerd[1436]: time="2025-01-13T21:15:55.792163144Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:55.794262 containerd[1436]: time="2025-01-13T21:15:55.794215504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:15:55.794890 containerd[1436]: time="2025-01-13T21:15:55.794793224Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.44803896s"
Jan 13 21:15:55.794890 containerd[1436]: time="2025-01-13T21:15:55.794836904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Jan 13 21:15:55.812890 containerd[1436]: time="2025-01-13T21:15:55.812850984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:15:56.410417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100752267.mount: Deactivated successfully.
Jan 13 21:15:57.331879 containerd[1436]: time="2025-01-13T21:15:57.331835024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.332782 containerd[1436]: time="2025-01-13T21:15:57.332366864Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:15:57.335132 containerd[1436]: time="2025-01-13T21:15:57.333538664Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.336828 containerd[1436]: time="2025-01-13T21:15:57.336778624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.340079 containerd[1436]: time="2025-01-13T21:15:57.339043264Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.5261554s" Jan 13 21:15:57.340079 containerd[1436]: time="2025-01-13T21:15:57.339083224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:15:57.357151 containerd[1436]: time="2025-01-13T21:15:57.357124944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:15:57.763075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547628600.mount: Deactivated successfully. 
Jan 13 21:15:57.767647 containerd[1436]: time="2025-01-13T21:15:57.767596064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.768051 containerd[1436]: time="2025-01-13T21:15:57.768009064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 21:15:57.768877 containerd[1436]: time="2025-01-13T21:15:57.768826064Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.770847 containerd[1436]: time="2025-01-13T21:15:57.770797264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:15:57.771816 containerd[1436]: time="2025-01-13T21:15:57.771782384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 414.62328ms" Jan 13 21:15:57.771816 containerd[1436]: time="2025-01-13T21:15:57.771815024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:15:57.789477 containerd[1436]: time="2025-01-13T21:15:57.789254624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:15:58.431749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140191314.mount: Deactivated successfully. Jan 13 21:16:00.122368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:16:00.136292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:00.220195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:00.223668 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:16:00.262980 kubelet[2010]: E0113 21:16:00.262919 2010 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:16:00.265801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:16:00.265956 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:16:01.666216 containerd[1436]: time="2025-01-13T21:16:01.666144464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:01.666659 containerd[1436]: time="2025-01-13T21:16:01.666608544Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 21:16:01.667523 containerd[1436]: time="2025-01-13T21:16:01.667487384Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:01.670666 containerd[1436]: time="2025-01-13T21:16:01.670626104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:01.673740 containerd[1436]: time="2025-01-13T21:16:01.673507904Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.88421336s" Jan 13 21:16:01.673740 containerd[1436]: time="2025-01-13T21:16:01.673558464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 21:16:08.706086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:08.721537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:08.745619 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)... Jan 13 21:16:08.745640 systemd[1]: Reloading... Jan 13 21:16:08.816140 zram_generator::config[2147]: No configuration found. Jan 13 21:16:08.927970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:16:08.980334 systemd[1]: Reloading finished in 234 ms. Jan 13 21:16:09.028453 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:09.031201 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:16:09.031396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:09.032982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:09.121750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:09.126208 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:16:09.164805 kubelet[2188]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:16:09.164805 kubelet[2188]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:16:09.164805 kubelet[2188]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:16:09.165161 kubelet[2188]: I0113 21:16:09.164847 2188 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:16:09.914126 kubelet[2188]: I0113 21:16:09.914077 2188 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:16:09.914126 kubelet[2188]: I0113 21:16:09.914124 2188 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:16:09.915323 kubelet[2188]: I0113 21:16:09.914640 2188 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:16:09.949159 kubelet[2188]: I0113 21:16:09.949130 2188 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:16:09.949229 kubelet[2188]: E0113 21:16:09.949213 2188 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.957391 kubelet[2188]: I0113 21:16:09.957346 2188 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:16:09.957594 kubelet[2188]: I0113 21:16:09.957566 2188 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:16:09.957755 kubelet[2188]: I0113 21:16:09.957730 2188 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:16:09.957755 kubelet[2188]: I0113 21:16:09.957750 2188 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:16:09.957846 kubelet[2188]: I0113 21:16:09.957759 2188 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:16:09.958859 kubelet[2188]: I0113 21:16:09.958824 2188 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:16:09.960904 kubelet[2188]: I0113 21:16:09.960858 2188 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 21:16:09.960904 kubelet[2188]: I0113 21:16:09.960888 2188 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:16:09.960904 kubelet[2188]: I0113 21:16:09.960910 2188 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:16:09.961965 kubelet[2188]: I0113 21:16:09.960929 2188 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:16:09.961965 kubelet[2188]: W0113 21:16:09.961367 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.961965 kubelet[2188]: E0113 21:16:09.961423 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.961965 kubelet[2188]: W0113 21:16:09.961723 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.961965 kubelet[2188]: E0113 21:16:09.961754 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.962703 kubelet[2188]: I0113 21:16:09.962505 2188 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:16:09.963299 kubelet[2188]: I0113 21:16:09.963129 2188 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:16:09.963720 kubelet[2188]: W0113 21:16:09.963700 2188 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:16:09.967611 kubelet[2188]: I0113 21:16:09.967204 2188 server.go:1256] "Started kubelet"
Jan 13 21:16:09.967611 kubelet[2188]: I0113 21:16:09.967378 2188 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:16:09.968029 kubelet[2188]: I0113 21:16:09.967722 2188 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:16:09.968029 kubelet[2188]: I0113 21:16:09.967806 2188 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:16:09.968697 kubelet[2188]: I0113 21:16:09.968659 2188 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 21:16:09.969174 kubelet[2188]: I0113 21:16:09.969025 2188 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:16:09.973130 kubelet[2188]: I0113 21:16:09.971764 2188 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:16:09.973130 kubelet[2188]: I0113 21:16:09.971984 2188 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 21:16:09.973130 kubelet[2188]: I0113 21:16:09.972125 2188 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 21:16:09.973130 kubelet[2188]: W0113 21:16:09.972584 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.973130 kubelet[2188]: E0113 21:16:09.972630 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.973130 kubelet[2188]: E0113 21:16:09.972839 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms"
Jan 13 21:16:09.973339 kubelet[2188]: E0113 21:16:09.973185 2188 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d1965b51850 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:16:09.966680144 +0000 UTC m=+0.837173881,LastTimestamp:2025-01-13 21:16:09.966680144 +0000 UTC m=+0.837173881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 21:16:09.973428 kubelet[2188]: I0113 21:16:09.973404 2188 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:16:09.973586 kubelet[2188]: I0113 21:16:09.973552 2188 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:16:09.974492 kubelet[2188]: E0113 21:16:09.974462 2188 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:16:09.975132 kubelet[2188]: I0113 21:16:09.975090 2188 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:16:09.988016 kubelet[2188]: I0113 21:16:09.987991 2188 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:16:09.988016 kubelet[2188]: I0113 21:16:09.988036 2188 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:16:09.988016 kubelet[2188]: I0113 21:16:09.988059 2188 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:16:09.989978 kubelet[2188]: I0113 21:16:09.989950 2188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:16:09.990999 kubelet[2188]: I0113 21:16:09.990968 2188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:16:09.990999 kubelet[2188]: I0113 21:16:09.990990 2188 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:16:09.990999 kubelet[2188]: I0113 21:16:09.991005 2188 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 21:16:09.991120 kubelet[2188]: E0113 21:16:09.991051 2188 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:16:09.991589 kubelet[2188]: W0113 21:16:09.991558 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:09.991689 kubelet[2188]: E0113 21:16:09.991594 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:10.051858 kubelet[2188]: I0113 21:16:10.051807 2188 policy_none.go:49] "None policy: Start"
Jan 13 21:16:10.052637 kubelet[2188]: I0113 21:16:10.052618 2188 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:16:10.052679 kubelet[2188]: I0113 21:16:10.052665 2188 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:16:10.060768 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:16:10.072964 kubelet[2188]: I0113 21:16:10.072921 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:16:10.073349 kubelet[2188]: E0113 21:16:10.073321 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Jan 13 21:16:10.073727 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:16:10.091119 kubelet[2188]: E0113 21:16:10.091089 2188 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:16:10.091325 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:16:10.092477 kubelet[2188]: I0113 21:16:10.092462 2188 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:16:10.092765 kubelet[2188]: I0113 21:16:10.092701 2188 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:16:10.093868 kubelet[2188]: E0113 21:16:10.093836 2188 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:16:10.174219 kubelet[2188]: E0113 21:16:10.174122 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Jan 13 21:16:10.275458 kubelet[2188]: I0113 21:16:10.275418 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:16:10.275760 kubelet[2188]: E0113 21:16:10.275743 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jan 13 21:16:10.291938 kubelet[2188]: I0113 21:16:10.291852 2188 topology_manager.go:215] "Topology Admit Handler" podUID="4b1f81689972d8cee6f380f99f4d0596" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:16:10.292849 kubelet[2188]: I0113 21:16:10.292794 2188 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:16:10.293678 kubelet[2188]: I0113 21:16:10.293647 2188 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:16:10.298554 systemd[1]: Created slice kubepods-burstable-pod4b1f81689972d8cee6f380f99f4d0596.slice - libcontainer container kubepods-burstable-pod4b1f81689972d8cee6f380f99f4d0596.slice. Jan 13 21:16:10.317875 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 21:16:10.320869 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Jan 13 21:16:10.374329 kubelet[2188]: I0113 21:16:10.374214 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:16:10.374329 kubelet[2188]: I0113 21:16:10.374255 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:16:10.374329 kubelet[2188]: I0113 21:16:10.374281 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:16:10.374329 kubelet[2188]: I0113 21:16:10.374302 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:16:10.374329 kubelet[2188]: I0113 21:16:10.374328 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:16:10.374674 kubelet[2188]: I0113 21:16:10.374372 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:16:10.374674 kubelet[2188]: I0113 21:16:10.374415 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:16:10.374674 kubelet[2188]: I0113 21:16:10.374466 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:16:10.374674 kubelet[2188]: I0113 21:16:10.374500 2188 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:16:10.575378 kubelet[2188]: E0113 21:16:10.575284 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms"
Jan 13 21:16:10.616690 kubelet[2188]: E0113 21:16:10.616442 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:16:10.618923 containerd[1436]: time="2025-01-13T21:16:10.618880558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b1f81689972d8cee6f380f99f4d0596,Namespace:kube-system,Attempt:0,}"
Jan 13 21:16:10.620026 kubelet[2188]: E0113 21:16:10.620009 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:16:10.620385 containerd[1436]: time="2025-01-13T21:16:10.620350494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:16:10.622973 kubelet[2188]: E0113 21:16:10.622951 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:16:10.623292 containerd[1436]: time="2025-01-13T21:16:10.623263685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Jan 13 21:16:10.676886 kubelet[2188]: I0113 21:16:10.676845 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:16:10.677216 kubelet[2188]: E0113 21:16:10.677178 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Jan 13 21:16:11.154013 kubelet[2188]: W0113 21:16:11.153951 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:11.154013 kubelet[2188]: E0113 21:16:11.153997 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 13 21:16:11.273481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307902091.mount: Deactivated successfully.
Jan 13 21:16:11.278482 containerd[1436]: time="2025-01-13T21:16:11.278442942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:16:11.279321 containerd[1436]: time="2025-01-13T21:16:11.279291412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:16:11.280438 containerd[1436]: time="2025-01-13T21:16:11.279954875Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:16:11.281517 containerd[1436]: time="2025-01-13T21:16:11.281482290Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:16:11.281935 containerd[1436]: time="2025-01-13T21:16:11.281903105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:16:11.282427 containerd[1436]: time="2025-01-13T21:16:11.282401443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:16:11.283671 containerd[1436]: time="2025-01-13T21:16:11.283630966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:16:11.284391 containerd[1436]: time="2025-01-13T21:16:11.284349472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:16:11.286617 containerd[1436]: time="2025-01-13T21:16:11.286593872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 667.63635ms" Jan 13 21:16:11.287512 containerd[1436]: time="2025-01-13T21:16:11.287477783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 664.154056ms" Jan 13 21:16:11.290562 containerd[1436]: time="2025-01-13T21:16:11.290531492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 670.121516ms" Jan 13 21:16:11.353992 kubelet[2188]: W0113 21:16:11.353923 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.353992 kubelet[2188]: E0113 
21:16:11.353988 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.360146 kubelet[2188]: W0113 21:16:11.360078 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.360146 kubelet[2188]: E0113 21:16:11.360148 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.380236 kubelet[2188]: W0113 21:16:11.380200 2188 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.380236 kubelet[2188]: E0113 21:16:11.380235 2188 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 13 21:16:11.380495 kubelet[2188]: E0113 21:16:11.380464 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" Jan 13 21:16:11.435509 containerd[1436]: time="2025-01-13T21:16:11.435283044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:11.435509 containerd[1436]: time="2025-01-13T21:16:11.435324686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:11.435509 containerd[1436]: time="2025-01-13T21:16:11.435348126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.435509 containerd[1436]: time="2025-01-13T21:16:11.435210162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:11.435509 containerd[1436]: time="2025-01-13T21:16:11.435486131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.436331 containerd[1436]: time="2025-01-13T21:16:11.435323886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:11.436331 containerd[1436]: time="2025-01-13T21:16:11.436292160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.436489 containerd[1436]: time="2025-01-13T21:16:11.436378763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.436839 containerd[1436]: time="2025-01-13T21:16:11.436767057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:11.437931 containerd[1436]: time="2025-01-13T21:16:11.437340597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:11.437931 containerd[1436]: time="2025-01-13T21:16:11.437389719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.437931 containerd[1436]: time="2025-01-13T21:16:11.437519204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:11.459314 systemd[1]: Started cri-containerd-89ef12c7f8d47dec72c55c376a02f18d12a73f150bc81d94656259bec0f59e29.scope - libcontainer container 89ef12c7f8d47dec72c55c376a02f18d12a73f150bc81d94656259bec0f59e29. Jan 13 21:16:11.460755 systemd[1]: Started cri-containerd-b39401175ee1011c33aca35c5ee37117f592522855e817f188b56a96093fc6ff.scope - libcontainer container b39401175ee1011c33aca35c5ee37117f592522855e817f188b56a96093fc6ff. Jan 13 21:16:11.461717 systemd[1]: Started cri-containerd-f04a2787a32928230be6f13df38b7a1324b3f1df1164f01e16e01719be928c9e.scope - libcontainer container f04a2787a32928230be6f13df38b7a1324b3f1df1164f01e16e01719be928c9e. Jan 13 21:16:11.479603 kubelet[2188]: I0113 21:16:11.479572 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:16:11.480118 kubelet[2188]: E0113 21:16:11.480081 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jan 13 21:16:11.494502 containerd[1436]: time="2025-01-13T21:16:11.494456550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"89ef12c7f8d47dec72c55c376a02f18d12a73f150bc81d94656259bec0f59e29\"" Jan 13 21:16:11.496537 kubelet[2188]: E0113 21:16:11.496467 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:11.499331 containerd[1436]: time="2025-01-13T21:16:11.499293803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f04a2787a32928230be6f13df38b7a1324b3f1df1164f01e16e01719be928c9e\"" Jan 13 21:16:11.499587 containerd[1436]: time="2025-01-13T21:16:11.499543611Z" level=info msg="CreateContainer within sandbox \"89ef12c7f8d47dec72c55c376a02f18d12a73f150bc81d94656259bec0f59e29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:16:11.500493 kubelet[2188]: E0113 21:16:11.500473 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:11.501810 containerd[1436]: time="2025-01-13T21:16:11.501773331Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b1f81689972d8cee6f380f99f4d0596,Namespace:kube-system,Attempt:0,} returns sandbox id \"b39401175ee1011c33aca35c5ee37117f592522855e817f188b56a96093fc6ff\"" Jan 13 21:16:11.502515 containerd[1436]: time="2025-01-13T21:16:11.502484196Z" level=info msg="CreateContainer within sandbox \"f04a2787a32928230be6f13df38b7a1324b3f1df1164f01e16e01719be928c9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:16:11.503032 kubelet[2188]: E0113 21:16:11.502804 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:11.506310 containerd[1436]: time="2025-01-13T21:16:11.506252450Z" level=info msg="CreateContainer within sandbox \"b39401175ee1011c33aca35c5ee37117f592522855e817f188b56a96093fc6ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:16:11.516376 containerd[1436]: time="2025-01-13T21:16:11.516324649Z" level=info msg="CreateContainer within sandbox \"89ef12c7f8d47dec72c55c376a02f18d12a73f150bc81d94656259bec0f59e29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac67906d155030f27f184aade0355c9acb04d77a298905101803dc30ad2ecad2\"" Jan 13 21:16:11.517059 containerd[1436]: time="2025-01-13T21:16:11.517030474Z" level=info msg="StartContainer for \"ac67906d155030f27f184aade0355c9acb04d77a298905101803dc30ad2ecad2\"" Jan 13 21:16:11.518797 containerd[1436]: time="2025-01-13T21:16:11.518758735Z" level=info msg="CreateContainer within sandbox \"f04a2787a32928230be6f13df38b7a1324b3f1df1164f01e16e01719be928c9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbe103f3730106eee3982930bfaece5048b55446a4264687b96eaf37b5033e1b\"" Jan 13 21:16:11.519191 containerd[1436]: time="2025-01-13T21:16:11.519149509Z" level=info msg="StartContainer for \"dbe103f3730106eee3982930bfaece5048b55446a4264687b96eaf37b5033e1b\"" Jan 13 21:16:11.521418 containerd[1436]: time="2025-01-13T21:16:11.521382949Z" level=info msg="CreateContainer within sandbox \"b39401175ee1011c33aca35c5ee37117f592522855e817f188b56a96093fc6ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1558cab431955c4ef06ef685c4d0456c0b7e017b993da313776aa845ed887f11\"" Jan 13 21:16:11.521995 containerd[1436]: time="2025-01-13T21:16:11.521844685Z" level=info msg="StartContainer for \"1558cab431955c4ef06ef685c4d0456c0b7e017b993da313776aa845ed887f11\"" Jan 13 21:16:11.545285 systemd[1]: Started cri-containerd-dbe103f3730106eee3982930bfaece5048b55446a4264687b96eaf37b5033e1b.scope - libcontainer container dbe103f3730106eee3982930bfaece5048b55446a4264687b96eaf37b5033e1b. Jan 13 21:16:11.549283 systemd[1]: Started cri-containerd-1558cab431955c4ef06ef685c4d0456c0b7e017b993da313776aa845ed887f11.scope - libcontainer container 1558cab431955c4ef06ef685c4d0456c0b7e017b993da313776aa845ed887f11. Jan 13 21:16:11.550137 systemd[1]: Started cri-containerd-ac67906d155030f27f184aade0355c9acb04d77a298905101803dc30ad2ecad2.scope - libcontainer container ac67906d155030f27f184aade0355c9acb04d77a298905101803dc30ad2ecad2. 
Jan 13 21:16:11.586670 containerd[1436]: time="2025-01-13T21:16:11.586621071Z" level=info msg="StartContainer for \"dbe103f3730106eee3982930bfaece5048b55446a4264687b96eaf37b5033e1b\" returns successfully" Jan 13 21:16:11.606127 containerd[1436]: time="2025-01-13T21:16:11.600935900Z" level=info msg="StartContainer for \"1558cab431955c4ef06ef685c4d0456c0b7e017b993da313776aa845ed887f11\" returns successfully" Jan 13 21:16:11.606127 containerd[1436]: time="2025-01-13T21:16:11.601026064Z" level=info msg="StartContainer for \"ac67906d155030f27f184aade0355c9acb04d77a298905101803dc30ad2ecad2\" returns successfully" Jan 13 21:16:12.000952 kubelet[2188]: E0113 21:16:12.000924 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:12.004141 kubelet[2188]: E0113 21:16:12.002268 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:12.004141 kubelet[2188]: E0113 21:16:12.002339 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:13.003776 kubelet[2188]: E0113 21:16:13.003743 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:13.004451 kubelet[2188]: E0113 21:16:13.004432 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:13.081527 kubelet[2188]: I0113 21:16:13.081496 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:16:13.129628 kubelet[2188]: I0113 21:16:13.129587 2188 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:16:13.150657 kubelet[2188]: E0113 21:16:13.150628 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.253544 kubelet[2188]: E0113 21:16:13.251514 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.354568 kubelet[2188]: E0113 21:16:13.354218 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.454797 kubelet[2188]: E0113 21:16:13.454756 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.555192 kubelet[2188]: E0113 21:16:13.555150 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.655697 kubelet[2188]: E0113 21:16:13.655660 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.756365 kubelet[2188]: E0113 21:16:13.756335 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.856812 kubelet[2188]: E0113 21:16:13.856780 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:16:13.963694 kubelet[2188]: I0113 21:16:13.963603 2188 apiserver.go:52] "Watching 
apiserver" Jan 13 21:16:13.972788 kubelet[2188]: I0113 21:16:13.972738 2188 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:16:15.751446 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)... Jan 13 21:16:15.751781 systemd[1]: Reloading... Jan 13 21:16:15.805137 zram_generator::config[2508]: No configuration found. Jan 13 21:16:15.890572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:16:15.955034 systemd[1]: Reloading finished in 202 ms. Jan 13 21:16:15.986589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:15.996955 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:16:15.997789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:15.997840 systemd[1]: kubelet.service: Consumed 1.170s CPU time, 115.8M memory peak, 0B memory swap peak. Jan 13 21:16:16.007376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:16:16.095402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:16:16.099156 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:16:16.146725 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:16:16.146725 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:16:16.146725 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:16:16.147069 kubelet[2550]: I0113 21:16:16.146788 2550 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:16:16.151513 kubelet[2550]: I0113 21:16:16.151479 2550 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:16:16.151513 kubelet[2550]: I0113 21:16:16.151510 2550 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:16:16.151711 kubelet[2550]: I0113 21:16:16.151695 2550 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:16:16.153290 kubelet[2550]: I0113 21:16:16.153263 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:16:16.155200 kubelet[2550]: I0113 21:16:16.155047 2550 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:16:16.163961 kubelet[2550]: I0113 21:16:16.163939 2550 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:16:16.164149 kubelet[2550]: I0113 21:16:16.164136 2550 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:16:16.164300 kubelet[2550]: I0113 21:16:16.164285 2550 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:16:16.164392 kubelet[2550]: I0113 21:16:16.164309 2550 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:16:16.164392 kubelet[2550]: I0113 21:16:16.164318 2550 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:16:16.164392 kubelet[2550]: I0113 21:16:16.164342 2550 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:16:16.164454 kubelet[2550]: I0113 21:16:16.164435 2550 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:16:16.164454 kubelet[2550]: I0113 21:16:16.164454 2550 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:16:16.164876 kubelet[2550]: I0113 21:16:16.164473 2550 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:16:16.164876 kubelet[2550]: I0113 21:16:16.164487 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:16:16.165747 kubelet[2550]: I0113 21:16:16.165711 2550 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:16:16.165994 kubelet[2550]: I0113 21:16:16.165967 2550 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:16:16.166539 kubelet[2550]: I0113 21:16:16.166509 2550 server.go:1256] "Started kubelet" Jan 13 21:16:16.167442 kubelet[2550]: I0113 21:16:16.167412 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:16:16.167608 kubelet[2550]: I0113 21:16:16.167588 2550 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:16:16.167733 kubelet[2550]: I0113 21:16:16.167707 2550 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:16:16.170924 kubelet[2550]: 
I0113 21:16:16.170892 2550 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:16:16.176374 kubelet[2550]: E0113 21:16:16.175076 2550 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:16:16.177391 kubelet[2550]: I0113 21:16:16.168866 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:16:16.178791 sudo[2565]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:16:16.179085 sudo[2565]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:16:16.179381 kubelet[2550]: I0113 21:16:16.179352 2550 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:16:16.182699 kubelet[2550]: I0113 21:16:16.182666 2550 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:16:16.182826 kubelet[2550]: I0113 21:16:16.182810 2550 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:16:16.185355 kubelet[2550]: I0113 21:16:16.185060 2550 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:16:16.185355 kubelet[2550]: I0113 21:16:16.185174 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:16:16.186539 kubelet[2550]: I0113 21:16:16.186458 2550 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:16:16.199727 kubelet[2550]: I0113 21:16:16.199699 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:16:16.201949 kubelet[2550]: I0113 21:16:16.200879 2550 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:16:16.201949 kubelet[2550]: I0113 21:16:16.200904 2550 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:16:16.201949 kubelet[2550]: I0113 21:16:16.200920 2550 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:16:16.201949 kubelet[2550]: E0113 21:16:16.201163 2550 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:16:16.218923 kubelet[2550]: I0113 21:16:16.218891 2550 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:16:16.218923 kubelet[2550]: I0113 21:16:16.218914 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:16:16.218923 kubelet[2550]: I0113 21:16:16.218939 2550 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:16:16.219169 kubelet[2550]: I0113 21:16:16.219074 2550 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:16:16.219169 kubelet[2550]: I0113 21:16:16.219098 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:16:16.219169 kubelet[2550]: I0113 21:16:16.219121 2550 policy_none.go:49] "None policy: Start" Jan 13 21:16:16.220037 kubelet[2550]: I0113 21:16:16.219746 2550 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:16:16.220037 kubelet[2550]: I0113 21:16:16.219779 2550 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:16:16.220037 kubelet[2550]: I0113 21:16:16.219969 2550 state_mem.go:75] "Updated machine memory state" Jan 13 21:16:16.223675 kubelet[2550]: I0113 21:16:16.223651 2550 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:16:16.224663 kubelet[2550]: I0113 21:16:16.224554 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:16:16.288187 kubelet[2550]: I0113 21:16:16.286772 2550 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:16:16.293511 kubelet[2550]: I0113 21:16:16.293323 2550 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:16:16.294278 kubelet[2550]: I0113 21:16:16.294253 2550 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:16:16.301446 kubelet[2550]: I0113 21:16:16.301258 2550 topology_manager.go:215] "Topology Admit Handler" podUID="4b1f81689972d8cee6f380f99f4d0596" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:16:16.302280 kubelet[2550]: I0113 21:16:16.302016 2550 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:16:16.302280 kubelet[2550]: I0113 21:16:16.302083 2550 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:16:16.483960 kubelet[2550]: I0113 21:16:16.483917 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:16:16.483960 kubelet[2550]: I0113 21:16:16.483971 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:16.484092 kubelet[2550]: I0113 21:16:16.483993 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:16.484092 kubelet[2550]: I0113 21:16:16.484016 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:16:16.484092 kubelet[2550]: I0113 21:16:16.484040 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:16.484092 kubelet[2550]: I0113 21:16:16.484058 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:16:16.484092 kubelet[2550]: I0113 21:16:16.484078 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b1f81689972d8cee6f380f99f4d0596-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b1f81689972d8cee6f380f99f4d0596\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:16:16.484240 kubelet[2550]: I0113 21:16:16.484124 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:16.484240 kubelet[2550]: I0113 21:16:16.484159 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:16.613301 kubelet[2550]: E0113 21:16:16.611375 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:16.613301 kubelet[2550]: E0113 21:16:16.612342 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:16.613639 kubelet[2550]: E0113 
21:16:16.613578 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:16.630236 sudo[2565]: pam_unix(sudo:session): session closed for user root Jan 13 21:16:17.165118 kubelet[2550]: I0113 21:16:17.165076 2550 apiserver.go:52] "Watching apiserver" Jan 13 21:16:17.183734 kubelet[2550]: I0113 21:16:17.183700 2550 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:16:17.212361 kubelet[2550]: E0113 21:16:17.211138 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:17.215540 kubelet[2550]: E0113 21:16:17.215516 2550 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:16:17.216892 kubelet[2550]: E0113 21:16:17.216043 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:17.217565 kubelet[2550]: E0113 21:16:17.217541 2550 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:16:17.217996 kubelet[2550]: E0113 21:16:17.217974 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:17.234197 kubelet[2550]: I0113 21:16:17.233886 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.233759493 podStartE2EDuration="1.233759493s" podCreationTimestamp="2025-01-13 21:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:17.232741109 +0000 UTC m=+1.130405207" watchObservedRunningTime="2025-01-13 21:16:17.233759493 +0000 UTC m=+1.131423551" Jan 13 21:16:17.251797 kubelet[2550]: I0113 21:16:17.251732 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2516835259999999 podStartE2EDuration="1.251683526s" podCreationTimestamp="2025-01-13 21:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:17.239764918 +0000 UTC m=+1.137428976" watchObservedRunningTime="2025-01-13 21:16:17.251683526 +0000 UTC m=+1.149347584" Jan 13 21:16:17.251929 kubelet[2550]: I0113 21:16:17.251815 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.251799449 podStartE2EDuration="1.251799449s" podCreationTimestamp="2025-01-13 21:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:17.251495282 +0000 UTC m=+1.149159420" watchObservedRunningTime="2025-01-13 21:16:17.251799449 +0000 UTC m=+1.149463547" Jan 13 21:16:18.212072 kubelet[2550]: E0113 21:16:18.212034 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:18.212072 kubelet[2550]: E0113 21:16:18.212063 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:18.866826 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 13 21:16:18.868922 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 13 21:16:18.871713 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:43558.service: Deactivated successfully. Jan 13 21:16:18.873217 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:16:18.873365 systemd[1]: session-7.scope: Consumed 10.139s CPU time, 186.7M memory peak, 0B memory swap peak. Jan 13 21:16:18.874392 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:16:18.875417 systemd-logind[1415]: Removed session 7. Jan 13 21:16:19.214083 kubelet[2550]: E0113 21:16:19.214051 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:19.608803 kubelet[2550]: E0113 21:16:19.608650 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:22.038127 kubelet[2550]: E0113 21:16:22.038085 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:22.218816 kubelet[2550]: E0113 21:16:22.218790 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:22.640671 update_engine[1423]: I20250113 21:16:22.640595 1423 update_attempter.cc:509] Updating boot flags... Jan 13 21:16:22.663136 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2637) Jan 13 21:16:22.695135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2637) Jan 13 21:16:22.714135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2637) Jan 13 21:16:23.220275 kubelet[2550]: E0113 21:16:23.220206 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:26.296810 kubelet[2550]: E0113 21:16:26.296508 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:27.225060 kubelet[2550]: E0113 21:16:27.225027 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:28.465613 kubelet[2550]: I0113 21:16:28.465582 2550 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:16:28.466414 containerd[1436]: time="2025-01-13T21:16:28.466296224Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:16:28.466746 kubelet[2550]: I0113 21:16:28.466522 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:16:29.400874 kubelet[2550]: I0113 21:16:29.400553 2550 topology_manager.go:215] "Topology Admit Handler" podUID="5890e4b5-c178-4ad5-83e4-c1b9aa455868" podNamespace="kube-system" podName="kube-proxy-gzspt" Jan 13 21:16:29.407168 kubelet[2550]: I0113 21:16:29.405802 2550 topology_manager.go:215] "Topology Admit Handler" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" podNamespace="kube-system" podName="cilium-pf5ng" Jan 13 21:16:29.414609 systemd[1]: Created slice kubepods-besteffort-pod5890e4b5_c178_4ad5_83e4_c1b9aa455868.slice - libcontainer container kubepods-besteffort-pod5890e4b5_c178_4ad5_83e4_c1b9aa455868.slice. Jan 13 21:16:29.430483 systemd[1]: Created slice kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice - libcontainer container kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice. Jan 13 21:16:29.477274 kubelet[2550]: I0113 21:16:29.477226 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-bpf-maps\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.477274 kubelet[2550]: I0113 21:16:29.477279 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-cgroup\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.477791 kubelet[2550]: I0113 21:16:29.477304 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbdf\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-kube-api-access-8fbdf\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.477791 kubelet[2550]: I0113 21:16:29.477331 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cni-path\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478193 kubelet[2550]: I0113 21:16:29.478171 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-xtables-lock\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478239 kubelet[2550]: I0113 21:16:29.478204 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-net\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478239 kubelet[2550]: I0113 21:16:29.478237 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5890e4b5-c178-4ad5-83e4-c1b9aa455868-kube-proxy\") pod \"kube-proxy-gzspt\" (UID: 
\"5890e4b5-c178-4ad5-83e4-c1b9aa455868\") " pod="kube-system/kube-proxy-gzspt" Jan 13 21:16:29.478322 kubelet[2550]: I0113 21:16:29.478294 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-etc-cni-netd\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478378 kubelet[2550]: I0113 21:16:29.478368 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-clustermesh-secrets\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478408 kubelet[2550]: I0113 21:16:29.478395 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znpkf\" (UniqueName: \"kubernetes.io/projected/5890e4b5-c178-4ad5-83e4-c1b9aa455868-kube-api-access-znpkf\") pod \"kube-proxy-gzspt\" (UID: \"5890e4b5-c178-4ad5-83e4-c1b9aa455868\") " pod="kube-system/kube-proxy-gzspt" Jan 13 21:16:29.478429 kubelet[2550]: I0113 21:16:29.478418 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hubble-tls\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478473 kubelet[2550]: I0113 21:16:29.478452 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5890e4b5-c178-4ad5-83e4-c1b9aa455868-lib-modules\") pod \"kube-proxy-gzspt\" (UID: \"5890e4b5-c178-4ad5-83e4-c1b9aa455868\") " pod="kube-system/kube-proxy-gzspt" Jan 13 21:16:29.478516 kubelet[2550]: I0113 21:16:29.478506 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-run\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478546 kubelet[2550]: I0113 21:16:29.478531 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-lib-modules\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478585 kubelet[2550]: I0113 21:16:29.478551 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-kernel\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478605 kubelet[2550]: I0113 21:16:29.478591 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5890e4b5-c178-4ad5-83e4-c1b9aa455868-xtables-lock\") pod \"kube-proxy-gzspt\" (UID: \"5890e4b5-c178-4ad5-83e4-c1b9aa455868\") " pod="kube-system/kube-proxy-gzspt" Jan 13 21:16:29.478633 kubelet[2550]: I0113 21:16:29.478616 2550 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hostproc\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.478666 kubelet[2550]: I0113 21:16:29.478637 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-config-path\") pod \"cilium-pf5ng\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") " pod="kube-system/cilium-pf5ng" Jan 13 21:16:29.611511 kubelet[2550]: I0113 21:16:29.611449 2550 topology_manager.go:215] "Topology Admit Handler" podUID="6feae5d7-e742-490b-9e01-f2498138266d" podNamespace="kube-system" podName="cilium-operator-5cc964979-2swwp" Jan 13 21:16:29.617598 systemd[1]: Created slice kubepods-besteffort-pod6feae5d7_e742_490b_9e01_f2498138266d.slice - libcontainer container kubepods-besteffort-pod6feae5d7_e742_490b_9e01_f2498138266d.slice. Jan 13 21:16:29.625864 kubelet[2550]: E0113 21:16:29.625818 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.680468 kubelet[2550]: I0113 21:16:29.680358 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6feae5d7-e742-490b-9e01-f2498138266d-cilium-config-path\") pod \"cilium-operator-5cc964979-2swwp\" (UID: \"6feae5d7-e742-490b-9e01-f2498138266d\") " pod="kube-system/cilium-operator-5cc964979-2swwp" Jan 13 21:16:29.680582 kubelet[2550]: I0113 21:16:29.680449 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtp5\" (UniqueName: \"kubernetes.io/projected/6feae5d7-e742-490b-9e01-f2498138266d-kube-api-access-6gtp5\") pod \"cilium-operator-5cc964979-2swwp\" (UID: \"6feae5d7-e742-490b-9e01-f2498138266d\") " pod="kube-system/cilium-operator-5cc964979-2swwp" Jan 13 21:16:29.728097 kubelet[2550]: E0113 21:16:29.728056 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.733141 kubelet[2550]: E0113 21:16:29.733020 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.733591 containerd[1436]: time="2025-01-13T21:16:29.733357856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzspt,Uid:5890e4b5-c178-4ad5-83e4-c1b9aa455868,Namespace:kube-system,Attempt:0,}" Jan 13 21:16:29.733591 containerd[1436]: time="2025-01-13T21:16:29.733415497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf5ng,Uid:8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef,Namespace:kube-system,Attempt:0,}" Jan 13 21:16:29.758567 containerd[1436]: time="2025-01-13T21:16:29.758341854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:29.758567 containerd[1436]: time="2025-01-13T21:16:29.758414135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:29.758567 containerd[1436]: time="2025-01-13T21:16:29.758441695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.758716 containerd[1436]: time="2025-01-13T21:16:29.758509936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:29.758716 containerd[1436]: time="2025-01-13T21:16:29.758628017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:29.758716 containerd[1436]: time="2025-01-13T21:16:29.758644178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.758797 containerd[1436]: time="2025-01-13T21:16:29.758726539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.759255 containerd[1436]: time="2025-01-13T21:16:29.758535896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.780341 systemd[1]: Started cri-containerd-17c890c3352bd612237b300796330f9b55d90f86331a91638b815b2d3160e5e0.scope - libcontainer container 17c890c3352bd612237b300796330f9b55d90f86331a91638b815b2d3160e5e0. Jan 13 21:16:29.786720 systemd[1]: Started cri-containerd-950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf.scope - libcontainer container 950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf. Jan 13 21:16:29.812712 containerd[1436]: time="2025-01-13T21:16:29.812649219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf5ng,Uid:8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\"" Jan 13 21:16:29.813791 kubelet[2550]: E0113 21:16:29.813444 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.814130 containerd[1436]: time="2025-01-13T21:16:29.814088035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzspt,Uid:5890e4b5-c178-4ad5-83e4-c1b9aa455868,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c890c3352bd612237b300796330f9b55d90f86331a91638b815b2d3160e5e0\"" Jan 13 21:16:29.814710 kubelet[2550]: E0113 21:16:29.814693 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.817025 containerd[1436]: time="2025-01-13T21:16:29.816896827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:16:29.820623 containerd[1436]: time="2025-01-13T21:16:29.820521267Z" level=info msg="CreateContainer within sandbox \"17c890c3352bd612237b300796330f9b55d90f86331a91638b815b2d3160e5e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:16:29.837787 containerd[1436]: time="2025-01-13T21:16:29.837732179Z" level=info msg="CreateContainer within sandbox \"17c890c3352bd612237b300796330f9b55d90f86331a91638b815b2d3160e5e0\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f2dd671966eb1ec32f559bb93d6076e27b059bffa2f174ae25f3533c57f43c5\"" Jan 13 21:16:29.838507 containerd[1436]: time="2025-01-13T21:16:29.838457627Z" level=info msg="StartContainer for \"3f2dd671966eb1ec32f559bb93d6076e27b059bffa2f174ae25f3533c57f43c5\"" Jan 13 21:16:29.866299 systemd[1]: Started cri-containerd-3f2dd671966eb1ec32f559bb93d6076e27b059bffa2f174ae25f3533c57f43c5.scope - libcontainer container 3f2dd671966eb1ec32f559bb93d6076e27b059bffa2f174ae25f3533c57f43c5. Jan 13 21:16:29.892373 containerd[1436]: time="2025-01-13T21:16:29.892326987Z" level=info msg="StartContainer for \"3f2dd671966eb1ec32f559bb93d6076e27b059bffa2f174ae25f3533c57f43c5\" returns successfully" Jan 13 21:16:29.921605 kubelet[2550]: E0113 21:16:29.921573 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:29.922224 containerd[1436]: time="2025-01-13T21:16:29.922183519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2swwp,Uid:6feae5d7-e742-490b-9e01-f2498138266d,Namespace:kube-system,Attempt:0,}" Jan 13 21:16:29.949211 containerd[1436]: time="2025-01-13T21:16:29.948932777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:29.949211 containerd[1436]: time="2025-01-13T21:16:29.948999498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:29.949211 containerd[1436]: time="2025-01-13T21:16:29.949015298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.949211 containerd[1436]: time="2025-01-13T21:16:29.949091499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:29.977318 systemd[1]: Started cri-containerd-b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603.scope - libcontainer container b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603. 
Jan 13 21:16:30.021337 containerd[1436]: time="2025-01-13T21:16:30.021249209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-2swwp,Uid:6feae5d7-e742-490b-9e01-f2498138266d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\"" Jan 13 21:16:30.023196 kubelet[2550]: E0113 21:16:30.023168 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:30.231903 kubelet[2550]: E0113 21:16:30.231449 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:30.240083 kubelet[2550]: I0113 21:16:30.240044 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gzspt" podStartSLOduration=1.240006574 podStartE2EDuration="1.240006574s" podCreationTimestamp="2025-01-13 21:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:30.239931013 +0000 UTC m=+14.137595111" watchObservedRunningTime="2025-01-13 21:16:30.240006574 +0000 UTC m=+14.137670672" Jan 13 21:16:32.731125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251573659.mount: Deactivated successfully. Jan 13 21:16:37.920243 containerd[1436]: time="2025-01-13T21:16:37.920193807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:37.921069 containerd[1436]: time="2025-01-13T21:16:37.920895372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650942" Jan 13 21:16:37.921765 containerd[1436]: time="2025-01-13T21:16:37.921711897Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:37.923943 containerd[1436]: time="2025-01-13T21:16:37.923330668Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.106387201s" Jan 13 21:16:37.923943 containerd[1436]: time="2025-01-13T21:16:37.923367028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:16:37.928750 containerd[1436]: time="2025-01-13T21:16:37.928723304Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:16:37.932616 containerd[1436]: time="2025-01-13T21:16:37.932586090Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:16:37.953667 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724974447.mount: Deactivated successfully. Jan 13 21:16:37.954343 containerd[1436]: time="2025-01-13T21:16:37.953752790Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\"" Jan 13 21:16:37.955121 containerd[1436]: time="2025-01-13T21:16:37.955044119Z" level=info msg="StartContainer for \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\"" Jan 13 21:16:37.990277 systemd[1]: Started cri-containerd-6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9.scope - libcontainer container 6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9. Jan 13 21:16:38.017295 containerd[1436]: time="2025-01-13T21:16:38.017260606Z" level=info msg="StartContainer for \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\" returns successfully" Jan 13 21:16:38.095989 systemd[1]: cri-containerd-6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9.scope: Deactivated successfully. Jan 13 21:16:38.273582 kubelet[2550]: E0113 21:16:38.273444 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:38.286444 containerd[1436]: time="2025-01-13T21:16:38.282593419Z" level=info msg="shim disconnected" id=6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9 namespace=k8s.io Jan 13 21:16:38.286548 containerd[1436]: time="2025-01-13T21:16:38.286451763Z" level=warning msg="cleaning up after shim disconnected" id=6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9 namespace=k8s.io Jan 13 21:16:38.286548 containerd[1436]: time="2025-01-13T21:16:38.286464523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:16:38.950751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9-rootfs.mount: Deactivated successfully. Jan 13 21:16:39.255372 kubelet[2550]: E0113 21:16:39.255153 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:39.258888 containerd[1436]: time="2025-01-13T21:16:39.258791282Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:16:39.278276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375642275.mount: Deactivated successfully. 
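The mount-cgroup entries trace the full life of a Cilium init container: CreateContainer, StartContainer, the runc.v2 shim's systemd scope starting, a quick exit, then "shim disconnected" cleanup. Against containerd's Go client the same create/start/wait sequence looks roughly like the sketch below; the image ref, container ID, and snapshot name are placeholders, and this is the generic client flow, not the CRI plugin's internal code path:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd the logged CRI plugin uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin operates in the k8s.io namespace, as the shim logs show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... returns container id ..."
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask spawns the runc.v2 shim; with the systemd cgroup driver it is
	// tracked as a cri-containerd-<id>.scope like the units above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe before Start so a fast exit is not missed
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
		log.Fatal(err)
	}
	status := <-exitCh // init containers exit almost immediately; the shim then disconnects
	code, _, _ := status.Result()
	log.Printf("exited with status %d", code)
}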
Jan 13 21:16:39.283756 containerd[1436]: time="2025-01-13T21:16:39.283709588Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\"" Jan 13 21:16:39.284425 containerd[1436]: time="2025-01-13T21:16:39.284386952Z" level=info msg="StartContainer for \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\"" Jan 13 21:16:39.311388 systemd[1]: Started cri-containerd-3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6.scope - libcontainer container 3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6. Jan 13 21:16:39.338382 containerd[1436]: time="2025-01-13T21:16:39.338334187Z" level=info msg="StartContainer for \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\" returns successfully" Jan 13 21:16:39.373887 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:16:39.374187 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:16:39.374254 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:16:39.381833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:16:39.382017 systemd[1]: cri-containerd-3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6.scope: Deactivated successfully. Jan 13 21:16:39.417330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:16:39.454039 containerd[1436]: time="2025-01-13T21:16:39.453920142Z" level=info msg="shim disconnected" id=3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6 namespace=k8s.io Jan 13 21:16:39.454039 containerd[1436]: time="2025-01-13T21:16:39.454005903Z" level=warning msg="cleaning up after shim disconnected" id=3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6 namespace=k8s.io Jan 13 21:16:39.454039 containerd[1436]: time="2025-01-13T21:16:39.454018943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:16:39.498474 containerd[1436]: time="2025-01-13T21:16:39.498096800Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:39.499187 containerd[1436]: time="2025-01-13T21:16:39.499163447Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282" Jan 13 21:16:39.500084 containerd[1436]: time="2025-01-13T21:16:39.500058532Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:16:39.501799 containerd[1436]: time="2025-01-13T21:16:39.501618141Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.572861397s" Jan 13 21:16:39.501799 containerd[1436]: time="2025-01-13T21:16:39.501660261Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:16:39.504286 containerd[1436]: time="2025-01-13T21:16:39.504257076Z" level=info msg="CreateContainer within sandbox \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:16:39.517672 containerd[1436]: time="2025-01-13T21:16:39.517577594Z" level=info msg="CreateContainer within sandbox \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\"" Jan 13 21:16:39.518084 containerd[1436]: time="2025-01-13T21:16:39.518015837Z" level=info msg="StartContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\"" Jan 13 21:16:39.546272 systemd[1]: Started cri-containerd-985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee.scope - libcontainer container 985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee. Jan 13 21:16:39.569142 containerd[1436]: time="2025-01-13T21:16:39.569068655Z" level=info msg="StartContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" returns successfully" Jan 13 21:16:39.952171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6-rootfs.mount: Deactivated successfully. Jan 13 21:16:40.259308 kubelet[2550]: E0113 21:16:40.259059 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:40.263240 kubelet[2550]: E0113 21:16:40.262304 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:40.264623 containerd[1436]: time="2025-01-13T21:16:40.264320741Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:16:40.295480 kubelet[2550]: I0113 21:16:40.295428 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-2swwp" podStartSLOduration=1.817436486 podStartE2EDuration="11.295385631s" podCreationTimestamp="2025-01-13 21:16:29 +0000 UTC" firstStartedPulling="2025-01-13 21:16:30.023905717 +0000 UTC m=+13.921569815" lastFinishedPulling="2025-01-13 21:16:39.501854862 +0000 UTC m=+23.399518960" observedRunningTime="2025-01-13 21:16:40.273839513 +0000 UTC m=+24.171503611" watchObservedRunningTime="2025-01-13 21:16:40.295385631 +0000 UTC m=+24.193049729" Jan 13 21:16:40.306593 containerd[1436]: time="2025-01-13T21:16:40.306544572Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\"" Jan 13 21:16:40.307172 containerd[1436]: time="2025-01-13T21:16:40.307140735Z" level=info msg="StartContainer for \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\"" Jan 13 21:16:40.360541 systemd[1]: Started 
cri-containerd-ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9.scope - libcontainer container ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9. Jan 13 21:16:40.399436 containerd[1436]: time="2025-01-13T21:16:40.397677991Z" level=info msg="StartContainer for \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\" returns successfully" Jan 13 21:16:40.400452 systemd[1]: cri-containerd-ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9.scope: Deactivated successfully. Jan 13 21:16:40.418164 containerd[1436]: time="2025-01-13T21:16:40.418067623Z" level=info msg="shim disconnected" id=ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9 namespace=k8s.io Jan 13 21:16:40.418164 containerd[1436]: time="2025-01-13T21:16:40.418135103Z" level=warning msg="cleaning up after shim disconnected" id=ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9 namespace=k8s.io Jan 13 21:16:40.418164 containerd[1436]: time="2025-01-13T21:16:40.418144343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:16:40.426906 containerd[1436]: time="2025-01-13T21:16:40.426857391Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:16:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:16:40.951009 systemd[1]: run-containerd-runc-k8s.io-ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9-runc.BVYm2F.mount: Deactivated successfully. Jan 13 21:16:40.951121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9-rootfs.mount: Deactivated successfully. Jan 13 21:16:41.269686 kubelet[2550]: E0113 21:16:41.268825 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:41.269686 kubelet[2550]: E0113 21:16:41.269228 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:41.272341 containerd[1436]: time="2025-01-13T21:16:41.271722406Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:16:41.288271 containerd[1436]: time="2025-01-13T21:16:41.287418247Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\"" Jan 13 21:16:41.288453 containerd[1436]: time="2025-01-13T21:16:41.288409612Z" level=info msg="StartContainer for \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\"" Jan 13 21:16:41.323296 systemd[1]: Started cri-containerd-9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262.scope - libcontainer container 9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262. Jan 13 21:16:41.346701 systemd[1]: cri-containerd-9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262.scope: Deactivated successfully. 
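The mount-bpf-fs step above exists so that the BPF filesystem is mounted at /sys/fs/bpf and Cilium's pinned maps survive agent restarts; functionally it amounts to mount -t bpf bpffs /sys/fs/bpf. A sketch of that single mount via golang.org/x/sys/unix, with idempotence checks elided (needs root; the path is Cilium's conventional default, an assumption here):

package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const bpfRoot = "/sys/fs/bpf" // Cilium's conventional BPF mount point

	if err := os.MkdirAll(bpfRoot, 0o755); err != nil {
		log.Fatal(err)
	}
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`; detecting an
	// already-mounted bpffs is left out of this sketch.
	if err := unix.Mount("bpffs", bpfRoot, "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Printf("bpffs mounted at %s; pinned maps now outlive the agent", bpfRoot)
}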
Jan 13 21:16:41.351743 containerd[1436]: time="2025-01-13T21:16:41.351559696Z" level=info msg="StartContainer for \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\" returns successfully" Jan 13 21:16:41.352294 containerd[1436]: time="2025-01-13T21:16:41.352183979Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice/cri-containerd-9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262.scope/memory.events\": no such file or directory" Jan 13 21:16:41.368652 containerd[1436]: time="2025-01-13T21:16:41.368593743Z" level=info msg="shim disconnected" id=9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262 namespace=k8s.io Jan 13 21:16:41.368804 containerd[1436]: time="2025-01-13T21:16:41.368656584Z" level=warning msg="cleaning up after shim disconnected" id=9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262 namespace=k8s.io Jan 13 21:16:41.368804 containerd[1436]: time="2025-01-13T21:16:41.368665544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:16:41.951045 systemd[1]: run-containerd-runc-k8s.io-9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262-runc.DJkT6l.mount: Deactivated successfully. Jan 13 21:16:41.951153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262-rootfs.mount: Deactivated successfully. Jan 13 21:16:42.272404 kubelet[2550]: E0113 21:16:42.272300 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:42.276736 containerd[1436]: time="2025-01-13T21:16:42.274391267Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:16:42.295663 containerd[1436]: time="2025-01-13T21:16:42.295614729Z" level=info msg="CreateContainer within sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\"" Jan 13 21:16:42.297683 containerd[1436]: time="2025-01-13T21:16:42.297606219Z" level=info msg="StartContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\"" Jan 13 21:16:42.324308 systemd[1]: Started cri-containerd-bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583.scope - libcontainer container bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583. 
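The EventChan warning above is most plausibly a benign race: clean-cilium-state exited so quickly that its cgroup was gone before containerd could attach the memory.events watch, which matches the scope deactivating before StartContainer even returned. The path in the warning also shows how the systemd cgroup driver lays out burstable pods; below is a reconstruction of that layout as a hypothetical helper, not containerd's own code:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// burstableContainerCgroup rebuilds the cgroupfs path seen in the warning:
// dashes in the pod UID become underscores inside the slice name, and each
// container gets a cri-containerd-<id>.scope leaf.
func burstableContainerCgroup(podUID, containerID string) string {
	slice := "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	scope := "cri-containerd-" + containerID + ".scope"
	return filepath.Join("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice", slice, scope)
}

func main() {
	fmt.Println(burstableContainerCgroup(
		"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef", // the pod UID embedded in the slice name above
		"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262",
	))
}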
Jan 13 21:16:42.351860 containerd[1436]: time="2025-01-13T21:16:42.350996436Z" level=info msg="StartContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" returns successfully" Jan 13 21:16:42.543469 kubelet[2550]: I0113 21:16:42.543153 2550 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:16:42.575375 kubelet[2550]: I0113 21:16:42.575332 2550 topology_manager.go:215] "Topology Admit Handler" podUID="4c36928a-db3b-48ad-ad30-ce02113ae8f9" podNamespace="kube-system" podName="coredns-76f75df574-7vrwv" Jan 13 21:16:42.580014 kubelet[2550]: I0113 21:16:42.579984 2550 topology_manager.go:215] "Topology Admit Handler" podUID="9725b41d-0ee0-448d-b93a-9d90e5519e0b" podNamespace="kube-system" podName="coredns-76f75df574-b6hnf" Jan 13 21:16:42.588788 systemd[1]: Created slice kubepods-burstable-pod4c36928a_db3b_48ad_ad30_ce02113ae8f9.slice - libcontainer container kubepods-burstable-pod4c36928a_db3b_48ad_ad30_ce02113ae8f9.slice. Jan 13 21:16:42.594922 systemd[1]: Created slice kubepods-burstable-pod9725b41d_0ee0_448d_b93a_9d90e5519e0b.slice - libcontainer container kubepods-burstable-pod9725b41d_0ee0_448d_b93a_9d90e5519e0b.slice. Jan 13 21:16:42.675861 kubelet[2550]: I0113 21:16:42.675809 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4qg\" (UniqueName: \"kubernetes.io/projected/4c36928a-db3b-48ad-ad30-ce02113ae8f9-kube-api-access-jr4qg\") pod \"coredns-76f75df574-7vrwv\" (UID: \"4c36928a-db3b-48ad-ad30-ce02113ae8f9\") " pod="kube-system/coredns-76f75df574-7vrwv" Jan 13 21:16:42.675861 kubelet[2550]: I0113 21:16:42.675857 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b425l\" (UniqueName: \"kubernetes.io/projected/9725b41d-0ee0-448d-b93a-9d90e5519e0b-kube-api-access-b425l\") pod \"coredns-76f75df574-b6hnf\" (UID: \"9725b41d-0ee0-448d-b93a-9d90e5519e0b\") " pod="kube-system/coredns-76f75df574-b6hnf" Jan 13 21:16:42.676005 kubelet[2550]: I0113 21:16:42.675882 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c36928a-db3b-48ad-ad30-ce02113ae8f9-config-volume\") pod \"coredns-76f75df574-7vrwv\" (UID: \"4c36928a-db3b-48ad-ad30-ce02113ae8f9\") " pod="kube-system/coredns-76f75df574-7vrwv" Jan 13 21:16:42.676005 kubelet[2550]: I0113 21:16:42.675902 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9725b41d-0ee0-448d-b93a-9d90e5519e0b-config-volume\") pod \"coredns-76f75df574-b6hnf\" (UID: \"9725b41d-0ee0-448d-b93a-9d90e5519e0b\") " pod="kube-system/coredns-76f75df574-b6hnf" Jan 13 21:16:42.891864 kubelet[2550]: E0113 21:16:42.891821 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:42.892672 containerd[1436]: time="2025-01-13T21:16:42.892626203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7vrwv,Uid:4c36928a-db3b-48ad-ad30-ce02113ae8f9,Namespace:kube-system,Attempt:0,}" Jan 13 21:16:42.897340 kubelet[2550]: E0113 21:16:42.897314 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 
21:16:42.897862 containerd[1436]: time="2025-01-13T21:16:42.897832268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b6hnf,Uid:9725b41d-0ee0-448d-b93a-9d90e5519e0b,Namespace:kube-system,Attempt:0,}" Jan 13 21:16:43.276495 kubelet[2550]: E0113 21:16:43.276390 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:43.295661 kubelet[2550]: I0113 21:16:43.295414 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pf5ng" podStartSLOduration=6.184526333 podStartE2EDuration="14.295363253s" podCreationTimestamp="2025-01-13 21:16:29 +0000 UTC" firstStartedPulling="2025-01-13 21:16:29.814865004 +0000 UTC m=+13.712529102" lastFinishedPulling="2025-01-13 21:16:37.925701924 +0000 UTC m=+21.823366022" observedRunningTime="2025-01-13 21:16:43.29231932 +0000 UTC m=+27.189983418" watchObservedRunningTime="2025-01-13 21:16:43.295363253 +0000 UTC m=+27.193027311" Jan 13 21:16:44.278330 kubelet[2550]: E0113 21:16:44.278287 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:44.516003 systemd-networkd[1373]: cilium_host: Link UP Jan 13 21:16:44.516147 systemd-networkd[1373]: cilium_net: Link UP Jan 13 21:16:44.516150 systemd-networkd[1373]: cilium_net: Gained carrier Jan 13 21:16:44.516347 systemd-networkd[1373]: cilium_host: Gained carrier Jan 13 21:16:44.516462 systemd-networkd[1373]: cilium_net: Gained IPv6LL Jan 13 21:16:44.585931 systemd-networkd[1373]: cilium_vxlan: Link UP Jan 13 21:16:44.586445 systemd-networkd[1373]: cilium_vxlan: Gained carrier Jan 13 21:16:44.881135 kernel: NET: Registered PF_ALG protocol family Jan 13 21:16:45.279737 kubelet[2550]: E0113 21:16:45.279606 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:45.485671 systemd-networkd[1373]: lxc_health: Link UP Jan 13 21:16:45.489381 systemd-networkd[1373]: cilium_host: Gained IPv6LL Jan 13 21:16:45.489644 systemd-networkd[1373]: lxc_health: Gained carrier Jan 13 21:16:45.742267 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Jan 13 21:16:45.969271 systemd-networkd[1373]: lxcb477163f4620: Link UP Jan 13 21:16:45.974142 kernel: eth0: renamed from tmp3d3f0 Jan 13 21:16:45.974225 systemd-networkd[1373]: lxc2b2898d7ecd3: Link UP Jan 13 21:16:45.987178 kernel: eth0: renamed from tmp80a25 Jan 13 21:16:45.995449 systemd-networkd[1373]: lxc2b2898d7ecd3: Gained carrier Jan 13 21:16:45.995696 systemd-networkd[1373]: lxcb477163f4620: Gained carrier Jan 13 21:16:46.280900 kubelet[2550]: E0113 21:16:46.280799 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:46.574267 systemd-networkd[1373]: lxc_health: Gained IPv6LL Jan 13 21:16:46.586262 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:49724.service - OpenSSH per-connection server daemon (10.0.0.1:49724). 
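cilium_host and cilium_net gain carrier together because they are the two ends of one veth pair (cilium_vxlan is the overlay device, and each lxc* link is a per-endpoint veth whose container side shows up as the "eth0: renamed" kernel lines). An illustrative pair creation with github.com/vishvananda/netlink, not Cilium's actual datapath code; the names are placeholders and root is required:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Both ends of a veth pair appear, and come up, together,
	// just like cilium_host/cilium_net in the networkd log above.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "demo_host"}, // placeholder names
		PeerName:  "demo_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	peer, err := netlink.LinkByName("demo_net")
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range []netlink.Link{veth, peer} {
		if err := netlink.LinkSetUp(l); err != nil { // "Gained carrier"
			log.Fatal(err)
		}
	}
}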
Jan 13 21:16:46.632367 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:16:46.633731 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:16:46.639579 systemd-logind[1415]: New session 8 of user core. Jan 13 21:16:46.650264 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:16:46.777292 sshd[3771]: pam_unix(sshd:session): session closed for user core Jan 13 21:16:46.780479 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:49724.service: Deactivated successfully. Jan 13 21:16:46.782743 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:16:46.783649 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:16:46.784711 systemd-logind[1415]: Removed session 8. Jan 13 21:16:47.663230 systemd-networkd[1373]: lxcb477163f4620: Gained IPv6LL Jan 13 21:16:48.046268 systemd-networkd[1373]: lxc2b2898d7ecd3: Gained IPv6LL Jan 13 21:16:49.529729 containerd[1436]: time="2025-01-13T21:16:49.529627168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:49.529729 containerd[1436]: time="2025-01-13T21:16:49.529700809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:49.529729 containerd[1436]: time="2025-01-13T21:16:49.529713049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:49.530250 containerd[1436]: time="2025-01-13T21:16:49.529866249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:49.538015 containerd[1436]: time="2025-01-13T21:16:49.537736833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:16:49.538015 containerd[1436]: time="2025-01-13T21:16:49.537799833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:16:49.538015 containerd[1436]: time="2025-01-13T21:16:49.537810994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:49.538332 containerd[1436]: time="2025-01-13T21:16:49.538167275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:16:49.551299 systemd[1]: Started cri-containerd-80a251b16f8e77bc478fec4bb266205b1f20e384524f54ae90ad19bcd949948e.scope - libcontainer container 80a251b16f8e77bc478fec4bb266205b1f20e384524f54ae90ad19bcd949948e. Jan 13 21:16:49.555653 systemd[1]: Started cri-containerd-3d3f061059e0ac33e13308707f0fa38bde60f7a5eff5a7da31f90541fb7a0f41.scope - libcontainer container 3d3f061059e0ac33e13308707f0fa38bde60f7a5eff5a7da31f90541fb7a0f41. 
Jan 13 21:16:49.566324 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:16:49.570373 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:16:49.583565 containerd[1436]: time="2025-01-13T21:16:49.583529734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b6hnf,Uid:9725b41d-0ee0-448d-b93a-9d90e5519e0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a251b16f8e77bc478fec4bb266205b1f20e384524f54ae90ad19bcd949948e\"" Jan 13 21:16:49.586125 kubelet[2550]: E0113 21:16:49.586067 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:49.590908 containerd[1436]: time="2025-01-13T21:16:49.590678996Z" level=info msg="CreateContainer within sandbox \"80a251b16f8e77bc478fec4bb266205b1f20e384524f54ae90ad19bcd949948e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:16:49.597851 containerd[1436]: time="2025-01-13T21:16:49.597763777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7vrwv,Uid:4c36928a-db3b-48ad-ad30-ce02113ae8f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d3f061059e0ac33e13308707f0fa38bde60f7a5eff5a7da31f90541fb7a0f41\"" Jan 13 21:16:49.598440 kubelet[2550]: E0113 21:16:49.598418 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:49.601147 containerd[1436]: time="2025-01-13T21:16:49.601055267Z" level=info msg="CreateContainer within sandbox \"3d3f061059e0ac33e13308707f0fa38bde60f7a5eff5a7da31f90541fb7a0f41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:16:49.604765 containerd[1436]: time="2025-01-13T21:16:49.604712439Z" level=info msg="CreateContainer within sandbox \"80a251b16f8e77bc478fec4bb266205b1f20e384524f54ae90ad19bcd949948e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86b49e8ee73491ee6cc7af3579b89b6a96585db7b4af12b69b6bbff628187642\"" Jan 13 21:16:49.606212 containerd[1436]: time="2025-01-13T21:16:49.606181083Z" level=info msg="StartContainer for \"86b49e8ee73491ee6cc7af3579b89b6a96585db7b4af12b69b6bbff628187642\"" Jan 13 21:16:49.612739 containerd[1436]: time="2025-01-13T21:16:49.612698103Z" level=info msg="CreateContainer within sandbox \"3d3f061059e0ac33e13308707f0fa38bde60f7a5eff5a7da31f90541fb7a0f41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"183f61af45a57d4cd888f3677405fbd5a778c913ec67d20ad356a431ee5e489e\"" Jan 13 21:16:49.613967 containerd[1436]: time="2025-01-13T21:16:49.613388185Z" level=info msg="StartContainer for \"183f61af45a57d4cd888f3677405fbd5a778c913ec67d20ad356a431ee5e489e\"" Jan 13 21:16:49.630266 systemd[1]: Started cri-containerd-86b49e8ee73491ee6cc7af3579b89b6a96585db7b4af12b69b6bbff628187642.scope - libcontainer container 86b49e8ee73491ee6cc7af3579b89b6a96585db7b4af12b69b6bbff628187642. Jan 13 21:16:49.648270 systemd[1]: Started cri-containerd-183f61af45a57d4cd888f3677405fbd5a778c913ec67d20ad356a431ee5e489e.scope - libcontainer container 183f61af45a57d4cd888f3677405fbd5a778c913ec67d20ad356a431ee5e489e. 
Jan 13 21:16:49.685601 containerd[1436]: time="2025-01-13T21:16:49.685517326Z" level=info msg="StartContainer for \"86b49e8ee73491ee6cc7af3579b89b6a96585db7b4af12b69b6bbff628187642\" returns successfully" Jan 13 21:16:49.685730 containerd[1436]: time="2025-01-13T21:16:49.685546606Z" level=info msg="StartContainer for \"183f61af45a57d4cd888f3677405fbd5a778c913ec67d20ad356a431ee5e489e\" returns successfully" Jan 13 21:16:50.291465 kubelet[2550]: E0113 21:16:50.291421 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:50.294279 kubelet[2550]: E0113 21:16:50.294245 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:50.312390 kubelet[2550]: I0113 21:16:50.312301 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7vrwv" podStartSLOduration=21.312266387 podStartE2EDuration="21.312266387s" podCreationTimestamp="2025-01-13 21:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:50.312002266 +0000 UTC m=+34.209666364" watchObservedRunningTime="2025-01-13 21:16:50.312266387 +0000 UTC m=+34.209930485" Jan 13 21:16:50.312390 kubelet[2550]: I0113 21:16:50.312379 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b6hnf" podStartSLOduration=21.312363307 podStartE2EDuration="21.312363307s" podCreationTimestamp="2025-01-13 21:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:50.299605191 +0000 UTC m=+34.197269289" watchObservedRunningTime="2025-01-13 21:16:50.312363307 +0000 UTC m=+34.210027405" Jan 13 21:16:51.295601 kubelet[2550]: E0113 21:16:51.295571 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:51.295960 kubelet[2550]: E0113 21:16:51.295579 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:51.792141 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:49732.service - OpenSSH per-connection server daemon (10.0.0.1:49732). Jan 13 21:16:51.830722 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 49732 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:16:51.832521 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:16:51.836592 systemd-logind[1415]: New session 9 of user core. Jan 13 21:16:51.848317 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:16:51.964302 sshd[3964]: pam_unix(sshd:session): session closed for user core Jan 13 21:16:51.967676 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:49732.service: Deactivated successfully. Jan 13 21:16:51.969493 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:16:51.970132 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:16:51.971042 systemd-logind[1415]: Removed session 9. 
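The pod_startup_latency_tracker entries are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts image-pull time (lastFinishedPulling minus firstStartedPulling), which is why the CoreDNS pods, whose pull fields are zero, report identical values. A quick Go check against the cilium-operator entry from earlier; this verifies the logged numbers rather than reproducing kubelet source:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator-5cc964979-2swwp entry.
	created := mustParse("2025-01-13 21:16:29 +0000 UTC")
	running := mustParse("2025-01-13 21:16:40.295385631 +0000 UTC")
	pullStart := mustParse("2025-01-13 21:16:30.023905717 +0000 UTC")
	pullEnd := mustParse("2025-01-13 21:16:39.501854862 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration excludes pull time
	fmt.Println(e2e, slo)               // 11.295385631s 1.817436486s
}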
Jan 13 21:16:52.297395 kubelet[2550]: E0113 21:16:52.297315 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:16:56.981990 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:41326.service - OpenSSH per-connection server daemon (10.0.0.1:41326). Jan 13 21:16:57.021210 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 41326 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:16:57.023121 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:16:57.027718 systemd-logind[1415]: New session 10 of user core. Jan 13 21:16:57.038331 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:16:57.159281 sshd[3982]: pam_unix(sshd:session): session closed for user core Jan 13 21:16:57.164786 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:41326.service: Deactivated successfully. Jan 13 21:16:57.166432 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:16:57.168119 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:16:57.169933 systemd-logind[1415]: Removed session 10. Jan 13 21:16:59.350291 kubelet[2550]: I0113 21:16:59.350194 2550 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:16:59.351215 kubelet[2550]: E0113 21:16:59.351190 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:00.329245 kubelet[2550]: E0113 21:17:00.329157 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:02.172610 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:41340.service - OpenSSH per-connection server daemon (10.0.0.1:41340). Jan 13 21:17:02.210221 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 41340 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:02.211657 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:02.215389 systemd-logind[1415]: New session 11 of user core. Jan 13 21:17:02.229300 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:17:02.346161 sshd[3999]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:02.349713 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:41340.service: Deactivated successfully. Jan 13 21:17:02.351474 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:17:02.353363 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:17:02.354276 systemd-logind[1415]: Removed session 11. Jan 13 21:17:07.357177 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:54650.service - OpenSSH per-connection server daemon (10.0.0.1:54650). Jan 13 21:17:07.392265 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 54650 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:07.393609 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:07.397461 systemd-logind[1415]: New session 12 of user core. Jan 13 21:17:07.408319 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 21:17:07.518487 sshd[4015]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:07.521851 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:54650.service: Deactivated successfully. Jan 13 21:17:07.524780 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:17:07.525505 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:17:07.526728 systemd-logind[1415]: Removed session 12. Jan 13 21:17:12.529441 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:50040.service - OpenSSH per-connection server daemon (10.0.0.1:50040). Jan 13 21:17:12.564069 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 50040 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:12.565416 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:12.569516 systemd-logind[1415]: New session 13 of user core. Jan 13 21:17:12.580380 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:17:12.694219 sshd[4030]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:12.698398 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:50040.service: Deactivated successfully. Jan 13 21:17:12.701288 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:17:12.702809 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:17:12.703825 systemd-logind[1415]: Removed session 13. Jan 13 21:17:17.704997 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:50048.service - OpenSSH per-connection server daemon (10.0.0.1:50048). Jan 13 21:17:17.740906 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 50048 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:17.741796 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:17.746063 systemd-logind[1415]: New session 14 of user core. Jan 13 21:17:17.756309 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:17:17.870543 sshd[4047]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:17.873688 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:50048.service: Deactivated successfully. Jan 13 21:17:17.876436 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:17:17.877237 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:17:17.878133 systemd-logind[1415]: Removed session 14. Jan 13 21:17:22.897021 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:54828.service - OpenSSH per-connection server daemon (10.0.0.1:54828). Jan 13 21:17:22.936636 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 54828 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:22.938055 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:22.941822 systemd-logind[1415]: New session 15 of user core. Jan 13 21:17:22.957262 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:17:23.064178 sshd[4064]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:23.073572 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:54828.service: Deactivated successfully. Jan 13 21:17:23.074945 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:17:23.076325 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:17:23.080463 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). 
Jan 13 21:17:23.082189 systemd-logind[1415]: Removed session 15. Jan 13 21:17:23.112689 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:23.113880 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:23.117301 systemd-logind[1415]: New session 16 of user core. Jan 13 21:17:23.123249 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:17:23.263462 sshd[4080]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:23.270304 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:54830.service: Deactivated successfully. Jan 13 21:17:23.273221 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:17:23.275202 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:17:23.285068 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:54840.service - OpenSSH per-connection server daemon (10.0.0.1:54840). Jan 13 21:17:23.286285 systemd-logind[1415]: Removed session 16. Jan 13 21:17:23.320486 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:23.322015 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:23.325635 systemd-logind[1415]: New session 17 of user core. Jan 13 21:17:23.339253 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:17:23.448177 sshd[4092]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:23.451436 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:54840.service: Deactivated successfully. Jan 13 21:17:23.453060 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:17:23.453690 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:17:23.454572 systemd-logind[1415]: Removed session 17. Jan 13 21:17:28.459606 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:54856.service - OpenSSH per-connection server daemon (10.0.0.1:54856). Jan 13 21:17:28.511074 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 54856 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:28.512587 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:28.516361 systemd-logind[1415]: New session 18 of user core. Jan 13 21:17:28.522272 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:17:28.635212 sshd[4108]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:28.638857 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:54856.service: Deactivated successfully. Jan 13 21:17:28.640514 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:17:28.641044 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:17:28.642050 systemd-logind[1415]: Removed session 18. Jan 13 21:17:31.202738 kubelet[2550]: E0113 21:17:31.202599 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:33.202946 kubelet[2550]: E0113 21:17:33.202465 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:33.650686 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:38880.service - OpenSSH per-connection server daemon (10.0.0.1:38880). 
Jan 13 21:17:33.687217 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 38880 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:33.688830 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:33.692680 systemd-logind[1415]: New session 19 of user core. Jan 13 21:17:33.703240 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:17:33.812330 sshd[4125]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:33.823102 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:38880.service: Deactivated successfully. Jan 13 21:17:33.825213 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:17:33.827337 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:17:33.832392 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:38882.service - OpenSSH per-connection server daemon (10.0.0.1:38882). Jan 13 21:17:33.833497 systemd-logind[1415]: Removed session 19. Jan 13 21:17:33.863372 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 38882 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:33.864546 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:33.867872 systemd-logind[1415]: New session 20 of user core. Jan 13 21:17:33.874248 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:17:34.081322 sshd[4140]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:34.089543 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:38882.service: Deactivated successfully. Jan 13 21:17:34.092014 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:17:34.093704 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:17:34.095240 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:38886.service - OpenSSH per-connection server daemon (10.0.0.1:38886). Jan 13 21:17:34.100166 systemd-logind[1415]: Removed session 20. Jan 13 21:17:34.135126 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:34.136417 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:34.140167 systemd-logind[1415]: New session 21 of user core. Jan 13 21:17:34.146245 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:17:35.406818 sshd[4152]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:35.418541 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:38886.service: Deactivated successfully. Jan 13 21:17:35.420050 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:17:35.424570 systemd-logind[1415]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:17:35.431386 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:38890.service - OpenSSH per-connection server daemon (10.0.0.1:38890). Jan 13 21:17:35.432602 systemd-logind[1415]: Removed session 21. Jan 13 21:17:35.462039 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 38890 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:35.463239 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:35.466939 systemd-logind[1415]: New session 22 of user core. Jan 13 21:17:35.472277 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 21:17:35.693313 sshd[4174]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:35.703697 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:38890.service: Deactivated successfully. Jan 13 21:17:35.705035 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:17:35.709360 systemd-logind[1415]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:17:35.718381 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:38904.service - OpenSSH per-connection server daemon (10.0.0.1:38904). Jan 13 21:17:35.719290 systemd-logind[1415]: Removed session 22. Jan 13 21:17:35.748719 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 38904 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:35.749986 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:35.754180 systemd-logind[1415]: New session 23 of user core. Jan 13 21:17:35.759261 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:17:35.872982 sshd[4186]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:35.876154 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:38904.service: Deactivated successfully. Jan 13 21:17:35.877986 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:17:35.879414 systemd-logind[1415]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:17:35.880603 systemd-logind[1415]: Removed session 23. Jan 13 21:17:36.203018 kubelet[2550]: E0113 21:17:36.202927 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:37.202599 kubelet[2550]: E0113 21:17:37.202569 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:17:40.883952 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:38906.service - OpenSSH per-connection server daemon (10.0.0.1:38906). Jan 13 21:17:40.918315 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 38906 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:40.919601 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:40.924129 systemd-logind[1415]: New session 24 of user core. Jan 13 21:17:40.935259 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:17:41.045679 sshd[4203]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:41.049614 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:38906.service: Deactivated successfully. Jan 13 21:17:41.051390 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:17:41.053659 systemd-logind[1415]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:17:41.054440 systemd-logind[1415]: Removed session 24. Jan 13 21:17:46.056200 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:60720.service - OpenSSH per-connection server daemon (10.0.0.1:60720). Jan 13 21:17:46.095048 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 60720 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:46.096680 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:46.101181 systemd-logind[1415]: New session 25 of user core. Jan 13 21:17:46.113276 systemd[1]: Started session-25.scope - Session 25 of User core. 
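Each SSH connection above gets its own socket-activated unit named sshd@<instance>-<local>:<port>-<peer>:<port>.service, so the unit name alone identifies both endpoints. A small illustrative parser for auditing these records; the regexp and program are this section's illustration, not part of OpenSSH or systemd:

package main

import (
	"fmt"
	"regexp"
)

// unitRE matches per-connection unit names such as
// sshd@23-10.0.0.59:22-10.0.0.1:38906.service from the log above.
var unitRE = regexp.MustCompile(`^sshd@\d+-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	m := unitRE.FindStringSubmatch("sshd@23-10.0.0.59:22-10.0.0.1:38906.service")
	if m == nil {
		panic("unit name did not match")
	}
	fmt.Printf("local %s:%s peer %s:%s\n", m[1], m[2], m[3], m[4]) // local 10.0.0.59:22 peer 10.0.0.1:38906
}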
Jan 13 21:17:46.236866 sshd[4217]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:46.240688 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:60720.service: Deactivated successfully. Jan 13 21:17:46.243229 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:17:46.245583 systemd-logind[1415]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:17:46.246978 systemd-logind[1415]: Removed session 25. Jan 13 21:17:51.247834 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:60728.service - OpenSSH per-connection server daemon (10.0.0.1:60728). Jan 13 21:17:51.282537 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 60728 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:51.283729 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:51.287413 systemd-logind[1415]: New session 26 of user core. Jan 13 21:17:51.294252 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:17:51.402184 sshd[4231]: pam_unix(sshd:session): session closed for user core Jan 13 21:17:51.410529 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:60728.service: Deactivated successfully. Jan 13 21:17:51.412075 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:17:51.415393 systemd-logind[1415]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:17:51.429372 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). Jan 13 21:17:51.430396 systemd-logind[1415]: Removed session 26. Jan 13 21:17:51.460608 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:17:51.461777 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:17:51.465858 systemd-logind[1415]: New session 27 of user core. Jan 13 21:17:51.472240 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:17:54.595776 containerd[1436]: time="2025-01-13T21:17:54.595608880Z" level=info msg="StopContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" with timeout 30 (s)" Jan 13 21:17:54.597489 containerd[1436]: time="2025-01-13T21:17:54.597464072Z" level=info msg="Stop container \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" with signal terminated" Jan 13 21:17:54.610011 systemd[1]: cri-containerd-985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee.scope: Deactivated successfully. Jan 13 21:17:54.619462 containerd[1436]: time="2025-01-13T21:17:54.619415972Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:17:54.627730 containerd[1436]: time="2025-01-13T21:17:54.627545913Z" level=info msg="StopContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" with timeout 2 (s)" Jan 13 21:17:54.628319 containerd[1436]: time="2025-01-13T21:17:54.628233525Z" level=info msg="Stop container \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" with signal terminated" Jan 13 21:17:54.631880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee-rootfs.mount: Deactivated successfully. 
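The "StopContainer ... with timeout 30 (s)" / "signal terminated" pair below is the CRI-level shutdown kubelet drives at pod deletion: SIGTERM first, SIGKILL once the grace period lapses (the cilium-agent container gets a shorter timeout of 2). A sketch of the same call over the CRI gRPC API; the socket path and container ID come from this log, but the program is illustrative rather than kubelet's implementation:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint this node's kubelet talks to (containerd's socket).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Mirrors `StopContainer ... with timeout 30 (s)`: the runtime sends
	// SIGTERM, then SIGKILL if the container outlives the grace period.
	_, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}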
Jan 13 21:17:54.634739 systemd-networkd[1373]: lxc_health: Link DOWN Jan 13 21:17:54.634749 systemd-networkd[1373]: lxc_health: Lost carrier Jan 13 21:17:54.639683 containerd[1436]: time="2025-01-13T21:17:54.639423919Z" level=info msg="shim disconnected" id=985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee namespace=k8s.io Jan 13 21:17:54.639923 containerd[1436]: time="2025-01-13T21:17:54.639620043Z" level=warning msg="cleaning up after shim disconnected" id=985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee namespace=k8s.io Jan 13 21:17:54.639923 containerd[1436]: time="2025-01-13T21:17:54.639739645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:17:54.660980 systemd[1]: cri-containerd-bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583.scope: Deactivated successfully. Jan 13 21:17:54.661256 systemd[1]: cri-containerd-bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583.scope: Consumed 6.496s CPU time. Jan 13 21:17:54.678639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583-rootfs.mount: Deactivated successfully. Jan 13 21:17:54.684830 containerd[1436]: time="2025-01-13T21:17:54.684788866Z" level=info msg="StopContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" returns successfully" Jan 13 21:17:54.685223 containerd[1436]: time="2025-01-13T21:17:54.684995149Z" level=info msg="shim disconnected" id=bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583 namespace=k8s.io Jan 13 21:17:54.685223 containerd[1436]: time="2025-01-13T21:17:54.685037270Z" level=warning msg="cleaning up after shim disconnected" id=bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583 namespace=k8s.io Jan 13 21:17:54.685223 containerd[1436]: time="2025-01-13T21:17:54.685045630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:17:54.685791 containerd[1436]: time="2025-01-13T21:17:54.685742042Z" level=info msg="StopPodSandbox for \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\"" Jan 13 21:17:54.685791 containerd[1436]: time="2025-01-13T21:17:54.685782843Z" level=info msg="Container to stop \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.690491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603-shm.mount: Deactivated successfully. Jan 13 21:17:54.697939 systemd[1]: cri-containerd-b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603.scope: Deactivated successfully. 
Jan 13 21:17:54.703484 containerd[1436]: time="2025-01-13T21:17:54.703435989Z" level=info msg="StopContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" returns successfully" Jan 13 21:17:54.703939 containerd[1436]: time="2025-01-13T21:17:54.703902997Z" level=info msg="StopPodSandbox for \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\"" Jan 13 21:17:54.703997 containerd[1436]: time="2025-01-13T21:17:54.703953718Z" level=info msg="Container to stop \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.703997 containerd[1436]: time="2025-01-13T21:17:54.703966838Z" level=info msg="Container to stop \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.703997 containerd[1436]: time="2025-01-13T21:17:54.703976799Z" level=info msg="Container to stop \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.703997 containerd[1436]: time="2025-01-13T21:17:54.703986879Z" level=info msg="Container to stop \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.704136 containerd[1436]: time="2025-01-13T21:17:54.703996199Z" level=info msg="Container to stop \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:17:54.706421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf-shm.mount: Deactivated successfully. Jan 13 21:17:54.714455 systemd[1]: cri-containerd-950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf.scope: Deactivated successfully. 
Jan 13 21:17:54.729259 containerd[1436]: time="2025-01-13T21:17:54.729199436Z" level=info msg="shim disconnected" id=b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603 namespace=k8s.io
Jan 13 21:17:54.729259 containerd[1436]: time="2025-01-13T21:17:54.729254437Z" level=warning msg="cleaning up after shim disconnected" id=b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603 namespace=k8s.io
Jan 13 21:17:54.729259 containerd[1436]: time="2025-01-13T21:17:54.729265757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:17:54.733592 containerd[1436]: time="2025-01-13T21:17:54.733512391Z" level=info msg="shim disconnected" id=950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf namespace=k8s.io
Jan 13 21:17:54.733592 containerd[1436]: time="2025-01-13T21:17:54.733567032Z" level=warning msg="cleaning up after shim disconnected" id=950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf namespace=k8s.io
Jan 13 21:17:54.733592 containerd[1436]: time="2025-01-13T21:17:54.733575512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:17:54.743495 containerd[1436]: time="2025-01-13T21:17:54.743434363Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:17:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:17:54.747447 containerd[1436]: time="2025-01-13T21:17:54.747396631Z" level=info msg="TearDown network for sandbox \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" successfully"
Jan 13 21:17:54.747447 containerd[1436]: time="2025-01-13T21:17:54.747429272Z" level=info msg="StopPodSandbox for \"950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf\" returns successfully"
Jan 13 21:17:54.769576 containerd[1436]: time="2025-01-13T21:17:54.769456134Z" level=info msg="TearDown network for sandbox \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\" successfully"
Jan 13 21:17:54.769576 containerd[1436]: time="2025-01-13T21:17:54.769491734Z" level=info msg="StopPodSandbox for \"b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603\" returns successfully"
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.803893 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-lib-modules\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.803937 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-run\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.803962 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gtp5\" (UniqueName: \"kubernetes.io/projected/6feae5d7-e742-490b-9e01-f2498138266d-kube-api-access-6gtp5\") pod \"6feae5d7-e742-490b-9e01-f2498138266d\" (UID: \"6feae5d7-e742-490b-9e01-f2498138266d\") "
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.803983 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-bpf-maps\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.804020 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cni-path\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.804266 kubelet[2550]: I0113 21:17:54.804038 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-xtables-lock\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804055 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-net\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804071 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-etc-cni-netd\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804091 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-config-path\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804140 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6feae5d7-e742-490b-9e01-f2498138266d-cilium-config-path\") pod \"6feae5d7-e742-490b-9e01-f2498138266d\" (UID: \"6feae5d7-e742-490b-9e01-f2498138266d\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804160 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-cgroup\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805489 kubelet[2550]: I0113 21:17:54.804187 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fbdf\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-kube-api-access-8fbdf\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805617 kubelet[2550]: I0113 21:17:54.804206 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hostproc\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805617 kubelet[2550]: I0113 21:17:54.804224 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-kernel\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805617 kubelet[2550]: I0113 21:17:54.804244 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-clustermesh-secrets\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.805617 kubelet[2550]: I0113 21:17:54.804261 2550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hubble-tls\") pod \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\" (UID: \"8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef\") "
Jan 13 21:17:54.809232 kubelet[2550]: I0113 21:17:54.807913 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.809232 kubelet[2550]: I0113 21:17:54.807940 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.809232 kubelet[2550]: I0113 21:17:54.808995 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cni-path" (OuterVolumeSpecName: "cni-path") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.810379 kubelet[2550]: I0113 21:17:54.810341 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6feae5d7-e742-490b-9e01-f2498138266d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6feae5d7-e742-490b-9e01-f2498138266d" (UID: "6feae5d7-e742-490b-9e01-f2498138266d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:17:54.810451 kubelet[2550]: I0113 21:17:54.810392 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.810451 kubelet[2550]: I0113 21:17:54.810421 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.810451 kubelet[2550]: I0113 21:17:54.810437 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.812297 kubelet[2550]: I0113 21:17:54.812264 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:17:54.812363 kubelet[2550]: I0113 21:17:54.812313 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.812417 kubelet[2550]: I0113 21:17:54.812382 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:17:54.812449 kubelet[2550]: I0113 21:17:54.812429 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hostproc" (OuterVolumeSpecName: "hostproc") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.812473 kubelet[2550]: I0113 21:17:54.812456 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.812473 kubelet[2550]: I0113 21:17:54.807912 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:17:54.812581 kubelet[2550]: I0113 21:17:54.812557 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6feae5d7-e742-490b-9e01-f2498138266d-kube-api-access-6gtp5" (OuterVolumeSpecName: "kube-api-access-6gtp5") pod "6feae5d7-e742-490b-9e01-f2498138266d" (UID: "6feae5d7-e742-490b-9e01-f2498138266d"). InnerVolumeSpecName "kube-api-access-6gtp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:17:54.812835 kubelet[2550]: I0113 21:17:54.812813 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-kube-api-access-8fbdf" (OuterVolumeSpecName: "kube-api-access-8fbdf") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "kube-api-access-8fbdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:17:54.814183 kubelet[2550]: I0113 21:17:54.814073 2550 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" (UID: "8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 21:17:54.904499 kubelet[2550]: I0113 21:17:54.904446 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904499 kubelet[2550]: I0113 21:17:54.904503 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6feae5d7-e742-490b-9e01-f2498138266d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904523 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904541 2550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8fbdf\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-kube-api-access-8fbdf\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904550 2550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904560 2550 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904569 2550 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904578 2550 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904587 2550 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904648 kubelet[2550]: I0113 21:17:54.904596 2550 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904604 2550 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904615 2550 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904624 2550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6gtp5\" (UniqueName: \"kubernetes.io/projected/6feae5d7-e742-490b-9e01-f2498138266d-kube-api-access-6gtp5\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904633 2550 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904643 2550 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:54.904827 kubelet[2550]: I0113 21:17:54.904653 2550 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 13 21:17:55.432809 systemd[1]: Removed slice kubepods-besteffort-pod6feae5d7_e742_490b_9e01_f2498138266d.slice - libcontainer container kubepods-besteffort-pod6feae5d7_e742_490b_9e01_f2498138266d.slice.
Jan 13 21:17:55.438487 systemd[1]: Removed slice kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice - libcontainer container kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice.
Jan 13 21:17:55.438582 systemd[1]: kubepods-burstable-pod8eb5cb4c_c40a_49d8_a34a_74d6ad2db5ef.slice: Consumed 6.684s CPU time.
Jan 13 21:17:55.439287 kubelet[2550]: I0113 21:17:55.439264 2550 scope.go:117] "RemoveContainer" containerID="985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee"
Jan 13 21:17:55.441426 containerd[1436]: time="2025-01-13T21:17:55.441384499Z" level=info msg="RemoveContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\""
Jan 13 21:17:55.444086 containerd[1436]: time="2025-01-13T21:17:55.443821820Z" level=info msg="RemoveContainer for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" returns successfully"
Jan 13 21:17:55.444195 kubelet[2550]: I0113 21:17:55.444081 2550 scope.go:117] "RemoveContainer" containerID="985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee"
Jan 13 21:17:55.444966 containerd[1436]: time="2025-01-13T21:17:55.444917359Z" level=error msg="ContainerStatus for \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\": not found"
Jan 13 21:17:55.447792 kubelet[2550]: E0113 21:17:55.447756 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\": not found" containerID="985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee"
Jan 13 21:17:55.450934 kubelet[2550]: I0113 21:17:55.450903 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee"} err="failed to get container status \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\": rpc error: code = NotFound desc = an error occurred when try to find container \"985028419523e13b051d220c809f135f48625d0c1270fa56f28a7697eb8d3fee\": not found"
Jan 13 21:17:55.451004 kubelet[2550]: I0113 21:17:55.450944 2550 scope.go:117] "RemoveContainer" containerID="bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583"
Jan 13 21:17:55.453205 containerd[1436]: time="2025-01-13T21:17:55.453177179Z" level=info msg="RemoveContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\""
Jan 13 21:17:55.455923 containerd[1436]: time="2025-01-13T21:17:55.455891225Z" level=info msg="RemoveContainer for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" returns successfully"
Jan 13 21:17:55.456098 kubelet[2550]: I0113 21:17:55.456067 2550 scope.go:117] "RemoveContainer" containerID="9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262"
Jan 13 21:17:55.457227 containerd[1436]: time="2025-01-13T21:17:55.457191967Z" level=info msg="RemoveContainer for \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\""
Jan 13 21:17:55.459918 containerd[1436]: time="2025-01-13T21:17:55.459877892Z" level=info msg="RemoveContainer for \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\" returns successfully"
Jan 13 21:17:55.460076 kubelet[2550]: I0113 21:17:55.460052 2550 scope.go:117] "RemoveContainer" containerID="ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9"
Jan 13 21:17:55.461306 containerd[1436]: time="2025-01-13T21:17:55.461088193Z" level=info msg="RemoveContainer for \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\""
Jan 13 21:17:55.470528 containerd[1436]: time="2025-01-13T21:17:55.470490872Z" level=info msg="RemoveContainer for \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\" returns successfully"
Jan 13 21:17:55.470767 kubelet[2550]: I0113 21:17:55.470748 2550 scope.go:117] "RemoveContainer" containerID="3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6"
Jan 13 21:17:55.471695 containerd[1436]: time="2025-01-13T21:17:55.471662652Z" level=info msg="RemoveContainer for \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\""
Jan 13 21:17:55.480172 containerd[1436]: time="2025-01-13T21:17:55.480137756Z" level=info msg="RemoveContainer for \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\" returns successfully"
Jan 13 21:17:55.480428 kubelet[2550]: I0113 21:17:55.480398 2550 scope.go:117] "RemoveContainer" containerID="6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9"
Jan 13 21:17:55.481390 containerd[1436]: time="2025-01-13T21:17:55.481361577Z" level=info msg="RemoveContainer for \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\""
Jan 13 21:17:55.483460 containerd[1436]: time="2025-01-13T21:17:55.483423612Z" level=info msg="RemoveContainer for \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\" returns successfully"
Jan 13 21:17:55.483633 kubelet[2550]: I0113 21:17:55.483601 2550 scope.go:117] "RemoveContainer" containerID="bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583"
Jan 13 21:17:55.483851 containerd[1436]: time="2025-01-13T21:17:55.483807218Z" level=error msg="ContainerStatus for \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\": not found"
Jan 13 21:17:55.483967 kubelet[2550]: E0113 21:17:55.483937 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\": not found" containerID="bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583"
Jan 13 21:17:55.484012 kubelet[2550]: I0113 21:17:55.484001 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583"} err="failed to get container status \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fba9909f9bf15c8da648cf87137fa68a7a2ffd045f9e7e0f7acf2772c8583\": not found"
Jan 13 21:17:55.484039 kubelet[2550]: I0113 21:17:55.484015 2550 scope.go:117] "RemoveContainer" containerID="9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262"
Jan 13 21:17:55.484209 containerd[1436]: time="2025-01-13T21:17:55.484173225Z" level=error msg="ContainerStatus for \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\": not found"
Jan 13 21:17:55.484306 kubelet[2550]: E0113 21:17:55.484291 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\": not found" containerID="9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262"
Jan 13 21:17:55.484335 kubelet[2550]: I0113 21:17:55.484320 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262"} err="failed to get container status \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\": rpc error: code = NotFound desc = an error occurred when try to find container \"9695e19bbf6e237e8f80ed60b04a261c319553020498a36ce12469146dcb6262\": not found"
Jan 13 21:17:55.484335 kubelet[2550]: I0113 21:17:55.484330 2550 scope.go:117] "RemoveContainer" containerID="ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9"
Jan 13 21:17:55.484556 containerd[1436]: time="2025-01-13T21:17:55.484515670Z" level=error msg="ContainerStatus for \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\": not found"
Jan 13 21:17:55.484642 kubelet[2550]: E0113 21:17:55.484628 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\": not found" containerID="ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9"
Jan 13 21:17:55.484668 kubelet[2550]: I0113 21:17:55.484661 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9"} err="failed to get container status \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff2c6921b7830a60d34a73e11345ea3c2d2616a285b1485917eb59ced4d2e6c9\": not found"
Jan 13 21:17:55.484692 kubelet[2550]: I0113 21:17:55.484672 2550 scope.go:117] "RemoveContainer" containerID="3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6"
Jan 13 21:17:55.484902 containerd[1436]: time="2025-01-13T21:17:55.484866716Z" level=error msg="ContainerStatus for \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\": not found"
Jan 13 21:17:55.485027 kubelet[2550]: E0113 21:17:55.485010 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\": not found" containerID="3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6"
Jan 13 21:17:55.485057 kubelet[2550]: I0113 21:17:55.485043 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6"} err="failed to get container status \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3488488f991f599752975a0eeadd1f272e19afe98188f98a52bae5bc5d6bb2c6\": not found"
Jan 13 21:17:55.485057 kubelet[2550]: I0113 21:17:55.485053 2550 scope.go:117] "RemoveContainer" containerID="6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9"
Jan 13 21:17:55.485265 containerd[1436]: time="2025-01-13T21:17:55.485233203Z" level=error msg="ContainerStatus for \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\": not found"
Jan 13 21:17:55.485367 kubelet[2550]: E0113 21:17:55.485352 2550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\": not found" containerID="6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9"
Jan 13 21:17:55.485395 kubelet[2550]: I0113 21:17:55.485380 2550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9"} err="failed to get container status \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6217f522af18ffd9c009ceb762a132a4623ce075c29c414ed6caeed804d7edf9\": not found"
Jan 13 21:17:55.602534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3529bcc428a57a522af6b9bbfaa897d57d878229bdf5bc104c5125991688603-rootfs.mount: Deactivated successfully.
Jan 13 21:17:55.602637 systemd[1]: var-lib-kubelet-pods-6feae5d7\x2de742\x2d490b\x2d9e01\x2df2498138266d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6gtp5.mount: Deactivated successfully.
Jan 13 21:17:55.602697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-950cfad71bf3bbb1259df2ae3cfbab961c9f8d6015c90aa3a5c16480802cefcf-rootfs.mount: Deactivated successfully.
Jan 13 21:17:55.602760 systemd[1]: var-lib-kubelet-pods-8eb5cb4c\x2dc40a\x2d49d8\x2da34a\x2d74d6ad2db5ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fbdf.mount: Deactivated successfully.
Jan 13 21:17:55.602818 systemd[1]: var-lib-kubelet-pods-8eb5cb4c\x2dc40a\x2d49d8\x2da34a\x2d74d6ad2db5ef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 21:17:55.602876 systemd[1]: var-lib-kubelet-pods-8eb5cb4c\x2dc40a\x2d49d8\x2da34a\x2d74d6ad2db5ef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 21:17:56.207006 kubelet[2550]: I0113 21:17:56.206961 2550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6feae5d7-e742-490b-9e01-f2498138266d" path="/var/lib/kubelet/pods/6feae5d7-e742-490b-9e01-f2498138266d/volumes"
Jan 13 21:17:56.207499 kubelet[2550]: I0113 21:17:56.207482 2550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" path="/var/lib/kubelet/pods/8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef/volumes"
Jan 13 21:17:56.247558 kubelet[2550]: E0113 21:17:56.247534 2550 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:17:56.561510 sshd[4245]: pam_unix(sshd:session): session closed for user core
Jan 13 21:17:56.572586 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:60732.service: Deactivated successfully.
Jan 13 21:17:56.574297 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:17:56.574458 systemd[1]: session-27.scope: Consumed 2.469s CPU time.
Jan 13 21:17:56.575517 systemd-logind[1415]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:17:56.585355 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:32778.service - OpenSSH per-connection server daemon (10.0.0.1:32778).
Jan 13 21:17:56.586480 systemd-logind[1415]: Removed session 27.
Jan 13 21:17:56.615880 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:17:56.617033 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:17:56.620523 systemd-logind[1415]: New session 28 of user core.
Jan 13 21:17:56.632231 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:17:58.162211 kubelet[2550]: I0113 21:17:58.161329 2550 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:17:58Z","lastTransitionTime":"2025-01-13T21:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:17:58.509435 sshd[4406]: pam_unix(sshd:session): session closed for user core
Jan 13 21:17:58.522066 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:32778.service: Deactivated successfully.
Jan 13 21:17:58.523755 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:17:58.523912 systemd[1]: session-28.scope: Consumed 1.811s CPU time.
Jan 13 21:17:58.525436 systemd-logind[1415]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:17:58.534380 systemd[1]: Started sshd@28-10.0.0.59:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782).
Jan 13 21:17:58.536045 systemd-logind[1415]: Removed session 28.
Jan 13 21:17:58.577176 kubelet[2550]: I0113 21:17:58.577098 2550 topology_manager.go:215] "Topology Admit Handler" podUID="3fec26d9-98e9-4583-ac6c-34729f77ebaf" podNamespace="kube-system" podName="cilium-5sjjf"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577199 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="apply-sysctl-overwrites"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577213 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="clean-cilium-state"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577220 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="cilium-agent"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577227 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="mount-cgroup"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577234 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6feae5d7-e742-490b-9e01-f2498138266d" containerName="cilium-operator"
Jan 13 21:17:58.577308 kubelet[2550]: E0113 21:17:58.577240 2550 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="mount-bpf-fs"
Jan 13 21:17:58.578630 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:17:58.580180 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:17:58.583878 kubelet[2550]: I0113 21:17:58.583589 2550 memory_manager.go:354] "RemoveStaleState removing state" podUID="6feae5d7-e742-490b-9e01-f2498138266d" containerName="cilium-operator"
Jan 13 21:17:58.583878 kubelet[2550]: I0113 21:17:58.583632 2550 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb5cb4c-c40a-49d8-a34a-74d6ad2db5ef" containerName="cilium-agent"
Jan 13 21:17:58.589339 systemd-logind[1415]: New session 29 of user core.
Jan 13 21:17:58.599306 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 21:17:58.608424 systemd[1]: Created slice kubepods-burstable-pod3fec26d9_98e9_4583_ac6c_34729f77ebaf.slice - libcontainer container kubepods-burstable-pod3fec26d9_98e9_4583_ac6c_34729f77ebaf.slice.
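
The slice names systemd creates and removes here encode the pod's QoS class and UID, with the UID's dashes mapped to underscores because systemd reserves "-" as its hierarchy separator. A small sketch of that naming rule, reconstructed from the log lines themselves rather than taken from kubelet's actual helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameForPod mirrors the pattern visible in the log:
    // kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice.
    // This is a reconstruction for illustration, not kubelet source code.
    func sliceNameForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceNameForPod("burstable", "3fec26d9-98e9-4583-ac6c-34729f77ebaf"))
        // Output: kubepods-burstable-pod3fec26d9_98e9_4583_ac6c_34729f77ebaf.slice
    }
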
Jan 13 21:17:58.622270 kubelet[2550]: I0113 21:17:58.622238 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-xtables-lock\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622513 kubelet[2550]: I0113 21:17:58.622468 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fec26d9-98e9-4583-ac6c-34729f77ebaf-cilium-config-path\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622558 kubelet[2550]: I0113 21:17:58.622518 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-hostproc\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622558 kubelet[2550]: I0113 21:17:58.622540 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-etc-cni-netd\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622600 kubelet[2550]: I0113 21:17:58.622560 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bggn\" (UniqueName: \"kubernetes.io/projected/3fec26d9-98e9-4583-ac6c-34729f77ebaf-kube-api-access-4bggn\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622600 kubelet[2550]: I0113 21:17:58.622583 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3fec26d9-98e9-4583-ac6c-34729f77ebaf-cilium-ipsec-secrets\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622600 kubelet[2550]: I0113 21:17:58.622601 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fec26d9-98e9-4583-ac6c-34729f77ebaf-hubble-tls\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622673 kubelet[2550]: I0113 21:17:58.622622 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-bpf-maps\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622673 kubelet[2550]: I0113 21:17:58.622640 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-cilium-cgroup\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622673 kubelet[2550]: I0113 21:17:58.622661 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-cni-path\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622743 kubelet[2550]: I0113 21:17:58.622680 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-host-proc-sys-kernel\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622743 kubelet[2550]: I0113 21:17:58.622700 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-cilium-run\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622743 kubelet[2550]: I0113 21:17:58.622718 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-lib-modules\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622743 kubelet[2550]: I0113 21:17:58.622744 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fec26d9-98e9-4583-ac6c-34729f77ebaf-host-proc-sys-net\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.622830 kubelet[2550]: I0113 21:17:58.622768 2550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fec26d9-98e9-4583-ac6c-34729f77ebaf-clustermesh-secrets\") pod \"cilium-5sjjf\" (UID: \"3fec26d9-98e9-4583-ac6c-34729f77ebaf\") " pod="kube-system/cilium-5sjjf"
Jan 13 21:17:58.653570 sshd[4420]: pam_unix(sshd:session): session closed for user core
Jan 13 21:17:58.667596 systemd[1]: sshd@28-10.0.0.59:22-10.0.0.1:32782.service: Deactivated successfully.
Jan 13 21:17:58.669024 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:17:58.671594 systemd-logind[1415]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:17:58.676383 systemd[1]: Started sshd@29-10.0.0.59:22-10.0.0.1:32786.service - OpenSSH per-connection server daemon (10.0.0.1:32786).
Jan 13 21:17:58.678258 systemd-logind[1415]: Removed session 29.
Jan 13 21:17:58.707616 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 32786 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:17:58.708913 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:17:58.712786 systemd-logind[1415]: New session 30 of user core.
Jan 13 21:17:58.725586 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 21:17:58.912154 kubelet[2550]: E0113 21:17:58.911841 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:17:58.912464 containerd[1436]: time="2025-01-13T21:17:58.912424338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sjjf,Uid:3fec26d9-98e9-4583-ac6c-34729f77ebaf,Namespace:kube-system,Attempt:0,}"
Jan 13 21:17:58.929205 containerd[1436]: time="2025-01-13T21:17:58.928560835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:17:58.929205 containerd[1436]: time="2025-01-13T21:17:58.928640476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:17:58.929205 containerd[1436]: time="2025-01-13T21:17:58.928652196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:17:58.929205 containerd[1436]: time="2025-01-13T21:17:58.928720357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:17:58.949283 systemd[1]: Started cri-containerd-0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec.scope - libcontainer container 0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec.
Jan 13 21:17:58.972315 containerd[1436]: time="2025-01-13T21:17:58.972177288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sjjf,Uid:3fec26d9-98e9-4583-ac6c-34729f77ebaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\""
Jan 13 21:17:58.973015 kubelet[2550]: E0113 21:17:58.972966 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:17:58.976025 containerd[1436]: time="2025-01-13T21:17:58.975227937Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:17:58.985718 containerd[1436]: time="2025-01-13T21:17:58.985670663Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531\""
Jan 13 21:17:58.987266 containerd[1436]: time="2025-01-13T21:17:58.986246792Z" level=info msg="StartContainer for \"fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531\""
Jan 13 21:17:59.011272 systemd[1]: Started cri-containerd-fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531.scope - libcontainer container fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531.
Jan 13 21:17:59.032069 containerd[1436]: time="2025-01-13T21:17:59.032030950Z" level=info msg="StartContainer for \"fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531\" returns successfully"
Jan 13 21:17:59.055710 systemd[1]: cri-containerd-fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531.scope: Deactivated successfully.
Jan 13 21:17:59.094945 containerd[1436]: time="2025-01-13T21:17:59.094793768Z" level=info msg="shim disconnected" id=fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531 namespace=k8s.io
Jan 13 21:17:59.094945 containerd[1436]: time="2025-01-13T21:17:59.094847849Z" level=warning msg="cleaning up after shim disconnected" id=fbb965a7b86160c38c6804bf031fa976acb6d0b67bb5ec841d7e135c1efb2531 namespace=k8s.io
Jan 13 21:17:59.094945 containerd[1436]: time="2025-01-13T21:17:59.094856729Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:17:59.442693 kubelet[2550]: E0113 21:17:59.442639 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:17:59.452028 containerd[1436]: time="2025-01-13T21:17:59.451885970Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:17:59.460918 containerd[1436]: time="2025-01-13T21:17:59.460864190Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba\""
Jan 13 21:17:59.461463 containerd[1436]: time="2025-01-13T21:17:59.461357838Z" level=info msg="StartContainer for \"3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba\""
Jan 13 21:17:59.485274 systemd[1]: Started cri-containerd-3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba.scope - libcontainer container 3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba.
Jan 13 21:17:59.504282 containerd[1436]: time="2025-01-13T21:17:59.504241026Z" level=info msg="StartContainer for \"3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba\" returns successfully"
Jan 13 21:17:59.510409 systemd[1]: cri-containerd-3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba.scope: Deactivated successfully.
Jan 13 21:17:59.528713 containerd[1436]: time="2025-01-13T21:17:59.528654686Z" level=info msg="shim disconnected" id=3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba namespace=k8s.io
Jan 13 21:17:59.528713 containerd[1436]: time="2025-01-13T21:17:59.528709447Z" level=warning msg="cleaning up after shim disconnected" id=3a294480e32824506e9298b28f6ed11b91dcbd02506ff062dddb1968046373ba namespace=k8s.io
Jan 13 21:17:59.528713 containerd[1436]: time="2025-01-13T21:17:59.528719647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:18:00.445957 kubelet[2550]: E0113 21:18:00.445786 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:00.447624 containerd[1436]: time="2025-01-13T21:18:00.447565697Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:18:00.465013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285023422.mount: Deactivated successfully.
Jan 13 21:18:00.466564 containerd[1436]: time="2025-01-13T21:18:00.466446185Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1\""
Jan 13 21:18:00.468073 containerd[1436]: time="2025-01-13T21:18:00.467159636Z" level=info msg="StartContainer for \"94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1\""
Jan 13 21:18:00.505269 systemd[1]: Started cri-containerd-94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1.scope - libcontainer container 94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1.
Jan 13 21:18:00.526781 systemd[1]: cri-containerd-94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1.scope: Deactivated successfully.
Jan 13 21:18:00.527961 containerd[1436]: time="2025-01-13T21:18:00.527924403Z" level=info msg="StartContainer for \"94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1\" returns successfully"
Jan 13 21:18:00.548159 containerd[1436]: time="2025-01-13T21:18:00.548031630Z" level=info msg="shim disconnected" id=94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1 namespace=k8s.io
Jan 13 21:18:00.548159 containerd[1436]: time="2025-01-13T21:18:00.548085231Z" level=warning msg="cleaning up after shim disconnected" id=94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1 namespace=k8s.io
Jan 13 21:18:00.548159 containerd[1436]: time="2025-01-13T21:18:00.548095231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:18:00.727220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94319f2bc242f07edf91335a012dee3d1c98da19ebbd342c2922dbd6ea23ddd1-rootfs.mount: Deactivated successfully.
Jan 13 21:18:01.249166 kubelet[2550]: E0113 21:18:01.249135 2550 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:18:01.449509 kubelet[2550]: E0113 21:18:01.449482 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:01.451762 containerd[1436]: time="2025-01-13T21:18:01.451710118Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:18:01.462852 containerd[1436]: time="2025-01-13T21:18:01.462793684Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05\""
Jan 13 21:18:01.463369 containerd[1436]: time="2025-01-13T21:18:01.463324492Z" level=info msg="StartContainer for \"5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05\""
Jan 13 21:18:01.495268 systemd[1]: Started cri-containerd-5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05.scope - libcontainer container 5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05.
Jan 13 21:18:01.514930 systemd[1]: cri-containerd-5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05.scope: Deactivated successfully.
Jan 13 21:18:01.517735 containerd[1436]: time="2025-01-13T21:18:01.517694384Z" level=info msg="StartContainer for \"5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05\" returns successfully"
Jan 13 21:18:01.534980 containerd[1436]: time="2025-01-13T21:18:01.534783360Z" level=info msg="shim disconnected" id=5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05 namespace=k8s.io
Jan 13 21:18:01.534980 containerd[1436]: time="2025-01-13T21:18:01.534833601Z" level=warning msg="cleaning up after shim disconnected" id=5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05 namespace=k8s.io
Jan 13 21:18:01.534980 containerd[1436]: time="2025-01-13T21:18:01.534847721Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:18:01.727273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ca8aec33a4552ffdba43c6e190c300aed758dd4308d743a865fade78e60ae05-rootfs.mount: Deactivated successfully.
Jan 13 21:18:02.455428 kubelet[2550]: E0113 21:18:02.455393 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:02.464801 containerd[1436]: time="2025-01-13T21:18:02.463360701Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:18:02.474386 containerd[1436]: time="2025-01-13T21:18:02.474339742Z" level=info msg="CreateContainer within sandbox \"0000bef69650a9e05528afc0d4a2b831bdc76a242cb33bbf1a4d10cbe25131ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db\""
Jan 13 21:18:02.475830 containerd[1436]: time="2025-01-13T21:18:02.474947990Z" level=info msg="StartContainer for \"4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db\""
Jan 13 21:18:02.505063 systemd[1]: run-containerd-runc-k8s.io-4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db-runc.jDC1Tf.mount: Deactivated successfully.
Jan 13 21:18:02.517275 systemd[1]: Started cri-containerd-4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db.scope - libcontainer container 4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db.
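
Each Cilium init step above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same containerd task lifecycle: CreateContainer, StartContainer, the short-lived process exits, its transient .scope deactivates, and the shim disconnects. A sketch of that lifecycle with the containerd Go client, run in the same k8s.io namespace the shim messages report; the image and container ID are illustrative, not taken from the log:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // The k8s.io namespace is the one the shim log lines above report.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }

        container, err := client.NewContainer(ctx, "init-step-demo",
            containerd.WithNewSnapshot("init-step-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")))
        if err != nil {
            panic(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            panic(err)
        }
        defer task.Delete(ctx)

        exitCh, err := task.Wait(ctx) // set up the wait before starting
        if err != nil {
            panic(err)
        }
        if err := task.Start(ctx); err != nil {
            panic(err)
        }

        // The exit is what triggers "Deactivated successfully" and
        // "shim disconnected" in the journal above.
        st := <-exitCh
        code, _, _ := st.Result()
        fmt.Println("exit code:", code)
    }
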
Jan 13 21:18:02.543193 containerd[1436]: time="2025-01-13T21:18:02.543155149Z" level=info msg="StartContainer for \"4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db\" returns successfully" Jan 13 21:18:02.824630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 21:18:03.459896 kubelet[2550]: E0113 21:18:03.459861 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:03.477154 kubelet[2550]: I0113 21:18:03.476776 2550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5sjjf" podStartSLOduration=5.476739805 podStartE2EDuration="5.476739805s" podCreationTimestamp="2025-01-13 21:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:18:03.474880139 +0000 UTC m=+107.372544197" watchObservedRunningTime="2025-01-13 21:18:03.476739805 +0000 UTC m=+107.374403903" Jan 13 21:18:04.914691 kubelet[2550]: E0113 21:18:04.913687 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:05.627677 systemd-networkd[1373]: lxc_health: Link UP Jan 13 21:18:05.628214 systemd-networkd[1373]: lxc_health: Gained carrier Jan 13 21:18:06.202679 kubelet[2550]: E0113 21:18:06.201788 2550 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b6hnf" podUID="9725b41d-0ee0-448d-b93a-9d90e5519e0b" Jan 13 21:18:06.915398 kubelet[2550]: E0113 21:18:06.914931 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:06.958513 systemd-networkd[1373]: lxc_health: Gained IPv6LL Jan 13 21:18:07.468812 kubelet[2550]: E0113 21:18:07.468780 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:08.203968 kubelet[2550]: E0113 21:18:08.202210 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:08.470228 kubelet[2550]: E0113 21:18:08.469986 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:11.480311 systemd[1]: run-containerd-runc-k8s.io-4bba08794e7c949770d156fe09f7de82a763e8a9957662378dc0da1aa996a0db-runc.lVaAmZ.mount: Deactivated successfully. Jan 13 21:18:11.523002 sshd[4430]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:11.527283 systemd[1]: sshd@29-10.0.0.59:22-10.0.0.1:32786.service: Deactivated successfully. Jan 13 21:18:11.529024 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 21:18:11.531665 systemd-logind[1415]: Session 30 logged out. Waiting for processes to exit. Jan 13 21:18:11.532512 systemd-logind[1415]: Removed session 30. 
Jan 13 21:18:12.204031 kubelet[2550]: E0113 21:18:12.203632 2550 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"