Jan 13 20:16:38.943146 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:38.943165 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:16:38.943174 kernel: KASLR enabled
Jan 13 20:16:38.943180 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:38.943186 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:16:38.943191 kernel: random: crng init done
Jan 13 20:16:38.943198 kernel: secureboot: Secure boot disabled
Jan 13 20:16:38.943204 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:38.943210 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:16:38.943217 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:38.943223 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943229 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943235 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943241 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943248 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943255 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943262 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943268 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943274 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:38.943280 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:16:38.943286 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:38.943293 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:38.943299 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 13 20:16:38.943305 kernel: Zone ranges:
Jan 13 20:16:38.943311 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:38.943318 kernel: DMA32 empty
Jan 13 20:16:38.943324 kernel: Normal empty
Jan 13 20:16:38.943330 kernel: Movable zone start for each node
Jan 13 20:16:38.943336 kernel: Early memory node ranges
Jan 13 20:16:38.943342 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:16:38.943348 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:16:38.943355 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:16:38.943361 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:16:38.943367 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:16:38.943373 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:16:38.943379 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:16:38.943385 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:38.943392 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:16:38.943399 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:38.943405 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:38.943413 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:38.943420 kernel: psci: Trusted OS migration not required
Jan 13 20:16:38.943426 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:38.943434 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:38.943441 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:38.943447 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:38.943454 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:16:38.943461 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:38.943473 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:38.943480 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:38.943486 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:38.943493 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:38.943499 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:38.943508 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:38.943514 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:38.943521 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:38.943527 kernel: alternatives: applying boot alternatives
Jan 13 20:16:38.943534 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:38.943541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:38.943548 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:38.943554 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:38.943561 kernel: Fallback order for Node 0: 0
Jan 13 20:16:38.943567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:16:38.943574 kernel: Policy zone: DMA
Jan 13 20:16:38.943581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:38.943588 kernel: software IO TLB: area num 4.
Jan 13 20:16:38.943594 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:16:38.943601 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Jan 13 20:16:38.943608 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:16:38.943615 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:38.943628 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:38.943635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:16:38.943642 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:38.943648 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:38.943655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:38.943662 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:16:38.943670 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:38.943676 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:38.943683 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:38.943689 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:38.943696 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:38.943703 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:38.943709 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:38.943716 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:38.943722 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:38.943729 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:16:38.943736 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:16:38.943744 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:38.943750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:38.943757 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:38.943787 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:38.943794 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:38.943801 kernel: arm-pv: using stolen time PV
Jan 13 20:16:38.943808 kernel: Console: colour dummy device 80x25
Jan 13 20:16:38.943815 kernel: ACPI: Core revision 20230628
Jan 13 20:16:38.943822 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:38.943828 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:38.943837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:38.943844 kernel: landlock: Up and running.
Jan 13 20:16:38.943850 kernel: SELinux: Initializing.
Jan 13 20:16:38.943857 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:38.943864 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:38.943871 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:16:38.943878 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:16:38.943885 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:38.943892 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:38.943899 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:38.943906 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:38.943913 kernel: Remapping and enabling EFI services.
Jan 13 20:16:38.943923 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:38.943930 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:38.943936 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:38.943943 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:16:38.943950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:38.943957 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:38.943963 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:16:38.943972 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:16:38.943978 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:16:38.943990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:38.943998 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:16:38.944006 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:16:38.944013 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:16:38.944020 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:16:38.944027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:38.944034 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:16:38.944042 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:16:38.944049 kernel: SMP: Total of 4 processors activated.
Jan 13 20:16:38.944056 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:38.944064 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:38.944071 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:38.944078 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:38.944085 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:38.944092 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:38.944101 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:38.944108 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:38.944115 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:38.944122 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:38.944129 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:38.944136 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:38.944143 kernel: devtmpfs: initialized
Jan 13 20:16:38.944151 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:38.944158 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:16:38.944166 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:38.944173 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:38.944180 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:16:38.944187 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:38.944194 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:38.944201 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:38.944209 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:38.944216 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:38.944223 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:38.944231 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:38.944238 kernel: cpuidle: using governor menu
Jan 13 20:16:38.944246 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:38.944253 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:38.944260 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:38.944267 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:38.944274 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:38.944281 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:38.944288 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:16:38.944297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:38.944304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:38.944311 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:38.944318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:38.944325 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:38.944333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:38.944340 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:38.944347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:38.944354 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:38.944362 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:38.944369 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:38.944376 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:38.944383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:38.944390 kernel: ACPI: Interpreter enabled
Jan 13 20:16:38.944397 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:38.944404 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:38.944412 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:38.944419 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:38.944426 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:38.944569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:38.944651 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:38.944716 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:38.944815 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:38.944882 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:38.944892 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:38.944899 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:38.944971 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:38.945034 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:38.945091 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:38.945146 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:38.945223 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:38.945296 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:16:38.945363 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:16:38.945427 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:16:38.945497 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:38.945560 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:38.945631 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:16:38.945697 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:16:38.945753 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:38.945883 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:38.945940 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:38.945950 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:38.945957 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:38.945964 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:38.945972 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:38.945979 kernel: iommu: Default domain type: Translated
Jan 13 20:16:38.945986 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:38.945995 kernel: efivars: Registered efivars operations
Jan 13 20:16:38.946002 kernel: vgaarb: loaded
Jan 13 20:16:38.946009 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:38.946016 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:38.946024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:38.946031 kernel: pnp: PnP ACPI init
Jan 13 20:16:38.946102 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:38.946113 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:38.946120 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:38.946129 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:38.946136 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:38.946143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:38.946156 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:38.946164 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:38.946171 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:38.946178 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:38.946185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:38.946194 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:38.946201 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:38.946208 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:38.946215 kernel: Initialise system trusted keyrings
Jan 13 20:16:38.946222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:38.946230 kernel: Key type asymmetric registered
Jan 13 20:16:38.946236 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:38.946244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:38.946251 kernel: io scheduler mq-deadline registered
Jan 13 20:16:38.946259 kernel: io scheduler kyber registered
Jan 13 20:16:38.946266 kernel: io scheduler bfq registered
Jan 13 20:16:38.946273 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:38.946280 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:38.946288 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:38.946354 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:16:38.946364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:38.946371 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:38.946378 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:38.946385 kernel: nicpf, ver 1.0
Jan 13 20:16:38.946393 kernel: nicvf, ver 1.0
Jan 13 20:16:38.946463 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:38.946531 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:38 UTC (1736799398)
Jan 13 20:16:38.946541 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:38.946548 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:38.946556 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:38.946563 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:38.946572 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:38.946579 kernel: Segment Routing with IPv6
Jan 13 20:16:38.946586 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:38.946593 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:38.946600 kernel: Key type dns_resolver registered
Jan 13 20:16:38.946607 kernel: registered taskstats version 1
Jan 13 20:16:38.946614 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:38.946628 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:16:38.946636 kernel: Key type .fscrypt registered
Jan 13 20:16:38.946642 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:38.946651 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:38.946658 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:38.946666 kernel: ima: No architecture policies found
Jan 13 20:16:38.946673 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:38.946683 kernel: clk: Disabling unused clocks
Jan 13 20:16:38.946690 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:16:38.946697 kernel: Run /init as init process
Jan 13 20:16:38.946704 kernel: with arguments:
Jan 13 20:16:38.946712 kernel: /init
Jan 13 20:16:38.946719 kernel: with environment:
Jan 13 20:16:38.946726 kernel: HOME=/
Jan 13 20:16:38.946733 kernel: TERM=linux
Jan 13 20:16:38.946740 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:38.946749 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:38.946758 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:38.946775 systemd[1]: Detected architecture arm64.
Jan 13 20:16:38.946784 systemd[1]: Running in initrd.
Jan 13 20:16:38.946792 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:38.946799 systemd[1]: Hostname set to <localhost>.
Jan 13 20:16:38.946807 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:38.946815 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:38.946822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:38.946830 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:38.946838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:38.946847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:38.946855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:38.946863 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:38.946873 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:38.946881 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:38.946889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:38.946897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:38.946906 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:38.946914 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:38.946922 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:38.946930 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:38.946938 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:38.946946 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:38.946954 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:38.946962 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:38.946971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:38.946979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:38.946990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:38.946998 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:38.947006 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:38.947014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:38.947022 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:38.947030 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:38.947039 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:38.947048 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:38.947057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:38.947066 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:38.947078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:38.947086 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:38.947094 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:38.947123 systemd-journald[239]: Collecting audit messages is disabled.
Jan 13 20:16:38.947144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:38.947155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:38.947163 systemd-journald[239]: Journal started
Jan 13 20:16:38.947200 systemd-journald[239]: Runtime Journal (/run/log/journal/ea09b1194b814d18aa96d0baa7d5b0b1) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:16:38.956839 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:38.956864 kernel: Bridge firewalling registered
Jan 13 20:16:38.933706 systemd-modules-load[240]: Inserted module 'overlay'
Jan 13 20:16:38.958359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:38.950365 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 13 20:16:38.961332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:38.962778 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:38.963577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:38.967262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:38.968535 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:38.970835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:38.977883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:38.982165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:38.983195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:38.993982 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:38.995825 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:39.003365 dracut-cmdline[277]: dracut-dracut-053
Jan 13 20:16:39.005606 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:39.023121 systemd-resolved[280]: Positive Trust Anchors:
Jan 13 20:16:39.023194 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:39.023226 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:39.027797 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 13 20:16:39.028794 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:39.029828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:39.068804 kernel: SCSI subsystem initialized
Jan 13 20:16:39.073780 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:39.080792 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:39.093040 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:39.093055 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:39.132891 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:39.139971 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:39.155785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:39.155819 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:39.156943 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:39.201792 kernel: raid6: neonx8 gen() 15745 MB/s
Jan 13 20:16:39.218776 kernel: raid6: neonx4 gen() 15662 MB/s
Jan 13 20:16:39.235781 kernel: raid6: neonx2 gen() 13322 MB/s
Jan 13 20:16:39.252781 kernel: raid6: neonx1 gen() 10491 MB/s
Jan 13 20:16:39.269774 kernel: raid6: int64x8 gen() 6934 MB/s
Jan 13 20:16:39.286777 kernel: raid6: int64x4 gen() 7341 MB/s
Jan 13 20:16:39.303778 kernel: raid6: int64x2 gen() 6131 MB/s
Jan 13 20:16:39.320778 kernel: raid6: int64x1 gen() 5053 MB/s
Jan 13 20:16:39.320797 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s
Jan 13 20:16:39.337783 kernel: raid6: .... xor() 11925 MB/s, rmw enabled
Jan 13 20:16:39.337795 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:39.342925 kernel: xor: measuring software checksum speed
Jan 13 20:16:39.342942 kernel: 8regs : 19764 MB/sec
Jan 13 20:16:39.344029 kernel: 32regs : 19664 MB/sec
Jan 13 20:16:39.344042 kernel: arm64_neon : 27070 MB/sec
Jan 13 20:16:39.344052 kernel: xor: using function: arm64_neon (27070 MB/sec)
Jan 13 20:16:39.395268 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:39.406742 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:39.421007 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:39.434466 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jan 13 20:16:39.437791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:39.448950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:39.460522 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 13 20:16:39.488829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:39.503939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:39.543474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:39.549978 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:39.562215 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:39.563385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:39.565092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:39.567167 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:39.578897 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:39.587428 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:16:39.593794 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:16:39.593908 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:39.593920 kernel: GPT:9289727 != 19775487
Jan 13 20:16:39.593936 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:39.593946 kernel: GPT:9289727 != 19775487
Jan 13 20:16:39.593957 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:39.593966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:39.587798 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:39.601553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:39.601686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:39.606625 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:39.608443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:39.608583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:39.618127 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (528)
Jan 13 20:16:39.618149 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Jan 13 20:16:39.612310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:39.622309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:39.631256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:16:39.636082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:16:39.637256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:39.647088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:16:39.648018 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:16:39.655955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:16:39.668904 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:16:39.670892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:39.675389 disk-uuid[555]: Primary Header is updated.
Jan 13 20:16:39.675389 disk-uuid[555]: Secondary Entries is updated.
Jan 13 20:16:39.675389 disk-uuid[555]: Secondary Header is updated.
Jan 13 20:16:39.677791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:39.690802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:39.694071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:40.692793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:40.693796 disk-uuid[556]: The operation has completed successfully.
Jan 13 20:16:40.713299 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:16:40.713407 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:16:40.735929 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:16:40.739006 sh[575]: Success
Jan 13 20:16:40.752787 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:16:40.790265 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:16:40.791747 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:16:40.792477 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:16:40.803554 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:16:40.803584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:40.803601 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:16:40.803618 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:16:40.804699 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:16:40.808075 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:16:40.809182 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:16:40.816873 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:16:40.818156 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:16:40.826465 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:40.826502 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:40.826513 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:40.828787 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:40.834987 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:16:40.836776 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:40.841975 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:16:40.849005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:16:40.916791 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:40.924947 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:40.927716 ignition[664]: Ignition 2.20.0
Jan 13 20:16:40.927723 ignition[664]: Stage: fetch-offline
Jan 13 20:16:40.927755 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:40.927776 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:40.927924 ignition[664]: parsed url from cmdline: ""
Jan 13 20:16:40.927927 ignition[664]: no config URL provided
Jan 13 20:16:40.927931 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:40.927938 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:40.927963 ignition[664]: op(1): [started] loading QEMU firmware config module
Jan 13 20:16:40.927967 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:16:40.948836 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:16:40.953681 systemd-networkd[766]: lo: Link UP
Jan 13 20:16:40.953691 systemd-networkd[766]: lo: Gained carrier
Jan 13 20:16:40.954428 systemd-networkd[766]: Enumeration completed
Jan 13 20:16:40.954525 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:40.956324 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:40.956327 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:40.957010 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:40.957028 systemd-networkd[766]: eth0: Link UP
Jan 13 20:16:40.957031 systemd-networkd[766]: eth0: Gained carrier
Jan 13 20:16:40.957037 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:40.971797 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:41.000378 ignition[664]: parsing config with SHA512: 6b3a02a409a4e113242c53376acf3cdcde29c4e594a7d8fb8a88c41999009e9307e186412f3fb792fa19a0673cd87e4677d4acc347f6d35bc0cebd2252ea6f5c
Jan 13 20:16:41.005102 unknown[664]: fetched base config from "system"
Jan 13 20:16:41.005112 unknown[664]: fetched user config from "qemu"
Jan 13 20:16:41.005526 ignition[664]: fetch-offline: fetch-offline passed
Jan 13 20:16:41.007459 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:41.005600 ignition[664]: Ignition finished successfully
Jan 13 20:16:41.008546 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:16:41.016898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:16:41.027601 ignition[773]: Ignition 2.20.0
Jan 13 20:16:41.027620 ignition[773]: Stage: kargs
Jan 13 20:16:41.027799 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:41.027810 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:41.028708 ignition[773]: kargs: kargs passed
Jan 13 20:16:41.028752 ignition[773]: Ignition finished successfully
Jan 13 20:16:41.032309 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:41.041913 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:16:41.051164 ignition[781]: Ignition 2.20.0
Jan 13 20:16:41.051175 ignition[781]: Stage: disks
Jan 13 20:16:41.051334 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:41.051344 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:41.052251 ignition[781]: disks: disks passed
Jan 13 20:16:41.052294 ignition[781]: Ignition finished successfully
Jan 13 20:16:41.056804 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:16:41.057725 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:41.058901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:16:41.060476 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:41.061906 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:41.063170 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:41.075930 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:16:41.085893 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:16:41.089588 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:16:41.091662 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:16:41.136809 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:16:41.137399 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:16:41.138444 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:41.154860 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:41.156682 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:16:41.157532 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:16:41.157567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:16:41.157589 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:41.162444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:16:41.164029 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:16:41.168105 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 13 20:16:41.168141 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:41.168151 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:41.169344 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:41.171849 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:41.172602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:41.208517 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:16:41.212229 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:16:41.215052 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:16:41.218526 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:16:41.287848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:41.296891 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:16:41.298194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:41.302782 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:41.317361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:41.319549 ignition[915]: INFO : Ignition 2.20.0
Jan 13 20:16:41.319549 ignition[915]: INFO : Stage: mount
Jan 13 20:16:41.320880 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:41.320880 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:41.320880 ignition[915]: INFO : mount: mount passed
Jan 13 20:16:41.320880 ignition[915]: INFO : Ignition finished successfully
Jan 13 20:16:41.321724 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:16:41.329903 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:16:41.802207 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:16:41.811967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:41.816780 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Jan 13 20:16:41.819214 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:41.819243 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:41.819254 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:41.821794 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:41.822191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:41.837452 ignition[947]: INFO : Ignition 2.20.0
Jan 13 20:16:41.837452 ignition[947]: INFO : Stage: files
Jan 13 20:16:41.838740 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:41.838740 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:41.838740 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:16:41.841276 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:16:41.841276 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:16:41.843451 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:16:41.844405 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:16:41.844405 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:16:41.843987 unknown[947]: wrote ssh authorized keys file for user: core
Jan 13 20:16:41.847166 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:16:41.847166 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:16:41.847166 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:41.847166 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:41.897269 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:16:42.007425 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:42.007425 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:16:42.010270 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:42.187937 systemd-networkd[766]: eth0: Gained IPv6LL
Jan 13 20:16:42.334868 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 13 20:16:42.421245 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:16:42.422790 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:16:42.717094 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 13 20:16:42.929514 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:16:42.929514 ignition[947]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jan 13 20:16:42.932193 ignition[947]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:16:42.954198 ignition[947]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:16:42.957433 ignition[947]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:42.959474 ignition[947]: INFO : files: files passed
Jan 13 20:16:42.959474 ignition[947]: INFO : Ignition finished successfully
Jan 13 20:16:42.960055 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:16:42.970974 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:16:42.973927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:16:42.974944 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:16:42.976203 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:16:42.980320 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:16:42.982794 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:42.982794 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:42.985321 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:42.987251 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:42.988438 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:16:42.999986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:16:43.016999 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:16:43.017111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:16:43.018649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:43.019964 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:16:43.021228 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:16:43.021877 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:16:43.035531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:43.045906 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:16:43.052891 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:43.053742 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:43.055194 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:16:43.056503 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:16:43.056601 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:43.058405 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:16:43.059855 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:16:43.061018 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:16:43.062213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:43.063572 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:43.064950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:16:43.066227 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:43.067692 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:16:43.069248 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:16:43.070463 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:16:43.071502 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:16:43.071600 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:16:43.073246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:43.074549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:43.076026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:16:43.076118 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:43.077501 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:16:43.077598 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:43.079550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:16:43.079658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:43.081077 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:16:43.082209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:16:43.082321 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:43.083648 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:16:43.085053 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:16:43.086134 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:16:43.086214 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:43.087494 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:16:43.087564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:16:43.089041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:16:43.089137 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:43.090326 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:16:43.090415 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:16:43.101949 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 13 20:16:43.102567 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:16:43.102687 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:43.104794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:16:43.105898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:16:43.106012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:43.107278 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:16:43.107369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:16:43.111382 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:16:43.111507 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:16:43.115476 ignition[1003]: INFO : Ignition 2.20.0 Jan 13 20:16:43.115476 ignition[1003]: INFO : Stage: umount Jan 13 20:16:43.115476 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:43.115476 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:16:43.115476 ignition[1003]: INFO : umount: umount passed Jan 13 20:16:43.115476 ignition[1003]: INFO : Ignition finished successfully Jan 13 20:16:43.115926 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:16:43.116009 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:16:43.117659 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:16:43.118012 systemd[1]: Stopped target network.target - Network. Jan 13 20:16:43.118691 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:16:43.118741 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:16:43.120193 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:16:43.120234 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:16:43.121329 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:16:43.121365 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:16:43.122473 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:16:43.122512 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:43.124105 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:16:43.126380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:16:43.128797 systemd-networkd[766]: eth0: DHCPv6 lease lost Jan 13 20:16:43.130704 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:16:43.130848 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:16:43.132517 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:16:43.132545 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:43.142863 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:16:43.143491 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:16:43.143544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:43.145032 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:43.147938 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 13 20:16:43.148034 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:16:43.151827 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:16:43.151904 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:43.153057 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:16:43.153099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:43.154342 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:16:43.154378 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:43.156720 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:16:43.156866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:16:43.170379 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:16:43.170520 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:43.172088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:16:43.172132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:43.173486 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:16:43.173521 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:43.174751 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:16:43.174808 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:43.176699 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:16:43.176739 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:43.178619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:43.178658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:43.194918 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:16:43.195647 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:16:43.195694 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:43.197269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:43.197308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:43.198866 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:16:43.198943 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:16:43.200178 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:16:43.200246 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:16:43.202096 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:16:43.203336 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:16:43.203385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:43.205197 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:16:43.213237 systemd[1]: Switching root. Jan 13 20:16:43.232298 systemd-journald[239]: Journal stopped Jan 13 20:16:43.947139 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:16:43.947192 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:16:43.947207 kernel: SELinux: policy capability open_perms=1 Jan 13 20:16:43.947217 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:16:43.947226 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:16:43.947241 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:16:43.947251 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:16:43.947260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:16:43.947273 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:16:43.947282 kernel: audit: type=1403 audit(1736799403.426:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:16:43.947293 systemd[1]: Successfully loaded SELinux policy in 34.060ms. Jan 13 20:16:43.947307 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.180ms. Jan 13 20:16:43.947318 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:43.947329 systemd[1]: Detected virtualization kvm. Jan 13 20:16:43.949957 systemd[1]: Detected architecture arm64. Jan 13 20:16:43.949994 systemd[1]: Detected first boot. Jan 13 20:16:43.950006 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:43.950017 zram_generator::config[1067]: No configuration found. Jan 13 20:16:43.950029 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:16:43.950045 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:43.950056 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:16:43.950072 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:16:43.950083 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:16:43.950094 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:16:43.950104 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:16:43.950115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:16:43.950128 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:16:43.950139 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:16:43.950152 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:16:43.950167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:43.950181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:43.950192 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:16:43.950209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:16:43.950225 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:16:43.950240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 20:16:43.950250 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:16:43.950261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:43.950273 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:16:43.950284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:43.950294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:43.950304 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:43.950315 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:43.950325 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:16:43.950336 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:16:43.950346 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:16:43.950358 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:16:43.950369 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:43.950380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:43.950391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:43.950401 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:16:43.950412 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:16:43.950422 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:16:43.950433 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:16:43.950443 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:16:43.950455 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:16:43.950466 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:16:43.950476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:43.950487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:43.950498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:43.950508 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:43.950519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:43.950529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:43.950541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:43.950551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:43.950561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:43.950572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:43.950583 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:16:43.950594 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 20:16:43.950615 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 13 20:16:43.950627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:43.950638 kernel: fuse: init (API version 7.39) Jan 13 20:16:43.950650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:43.950660 kernel: loop: module loaded Jan 13 20:16:43.950671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:16:43.950681 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:43.950692 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:43.950702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:43.950713 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:43.950749 systemd-journald[1147]: Collecting audit messages is disabled. Jan 13 20:16:43.950792 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:43.950803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:43.950814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:43.950824 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:43.950835 systemd-journald[1147]: Journal started Jan 13 20:16:43.950855 systemd-journald[1147]: Runtime Journal (/run/log/journal/ea09b1194b814d18aa96d0baa7d5b0b1) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:16:43.953220 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:43.955085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:43.956181 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:43.956343 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:43.957452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:43.957837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:43.959165 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:43.959317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:43.960320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:43.960476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:43.961855 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:43.963016 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:16:43.963169 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:43.964303 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:43.964512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:43.965646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:43.967019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:16:43.968374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:43.979075 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:43.988855 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:43.990996 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 13 20:16:43.991823 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:43.993595 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:16:43.995897 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:43.997842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:43.999137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:44.000048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:44.001926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:44.006684 systemd-journald[1147]: Time spent on flushing to /var/log/journal/ea09b1194b814d18aa96d0baa7d5b0b1 is 18.086ms for 848 entries. Jan 13 20:16:44.006684 systemd-journald[1147]: System Journal (/var/log/journal/ea09b1194b814d18aa96d0baa7d5b0b1) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:16:44.032096 systemd-journald[1147]: Received client request to flush runtime journal. Jan 13 20:16:44.003893 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:16:44.009084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:44.010184 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:44.011138 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:44.012910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:44.015613 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:44.018006 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:44.031142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:44.032461 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:16:44.034446 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 13 20:16:44.034464 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 13 20:16:44.034702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:44.040414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:44.052954 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:44.071410 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:44.081924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:44.092149 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 13 20:16:44.092167 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 13 20:16:44.095552 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:44.427328 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:16:44.437931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 13 20:16:44.457325 systemd-udevd[1228]: Using default interface naming scheme 'v255'. Jan 13 20:16:44.469829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:44.484015 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:44.492396 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 13 20:16:44.507035 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:44.519905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1243) Jan 13 20:16:44.553192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:16:44.556043 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:16:44.613901 systemd-networkd[1235]: lo: Link UP Jan 13 20:16:44.613915 systemd-networkd[1235]: lo: Gained carrier Jan 13 20:16:44.614720 systemd-networkd[1235]: Enumeration completed Jan 13 20:16:44.614993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:44.616068 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:44.618705 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:16:44.619204 systemd-networkd[1235]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:44.619214 systemd-networkd[1235]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:44.619881 systemd-networkd[1235]: eth0: Link UP Jan 13 20:16:44.619892 systemd-networkd[1235]: eth0: Gained carrier Jan 13 20:16:44.619904 systemd-networkd[1235]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:44.626803 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:44.630000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:16:44.636171 systemd-networkd[1235]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:44.646904 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:44.656090 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:44.684454 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:16:44.685689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:44.696042 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:44.699117 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:44.734254 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:44.735496 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:16:44.736585 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:44.736625 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:44.737515 systemd[1]: Reached target machines.target - Containers. 
Jan 13 20:16:44.739326 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:44.749927 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:44.752241 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:16:44.753243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:44.754235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:44.756518 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:44.758943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:44.763378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:44.770628 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:16:44.772792 kernel: loop0: detected capacity change from 0 to 194512 Jan 13 20:16:44.779691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:16:44.781137 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:44.783806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:44.820816 kernel: loop1: detected capacity change from 0 to 116808 Jan 13 20:16:44.869786 kernel: loop2: detected capacity change from 0 to 113536 Jan 13 20:16:44.903793 kernel: loop3: detected capacity change from 0 to 194512 Jan 13 20:16:44.911792 kernel: loop4: detected capacity change from 0 to 116808 Jan 13 20:16:44.917792 kernel: loop5: detected capacity change from 0 to 113536 Jan 13 20:16:44.921266 (sd-merge)[1295]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:16:44.921656 (sd-merge)[1295]: Merged extensions into '/usr'. Jan 13 20:16:44.927109 systemd[1]: Reloading requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:44.927124 systemd[1]: Reloading... Jan 13 20:16:44.974243 zram_generator::config[1320]: No configuration found. Jan 13 20:16:45.000597 ldconfig[1278]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:45.072475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:45.113989 systemd[1]: Reloading finished in 186 ms. Jan 13 20:16:45.130479 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:45.131680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:45.153991 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:45.155891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:45.159234 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:45.159248 systemd[1]: Reloading... Jan 13 20:16:45.171888 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 13 20:16:45.172138 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:45.172889 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:45.173116 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 13 20:16:45.173175 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 13 20:16:45.175163 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:45.175177 systemd-tmpfiles[1365]: Skipping /boot Jan 13 20:16:45.181682 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:45.181696 systemd-tmpfiles[1365]: Skipping /boot Jan 13 20:16:45.202857 zram_generator::config[1393]: No configuration found. Jan 13 20:16:45.290220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:45.333901 systemd[1]: Reloading finished in 174 ms. Jan 13 20:16:45.347850 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:45.368135 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:45.370564 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:45.372825 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:16:45.376475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:45.378859 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:45.386427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:45.389000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:45.394721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:45.398608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:45.399963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:45.403680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:45.403845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:45.406510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:45.406690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:45.409055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:16:45.410797 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:45.410989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:45.416952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:45.426048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:45.431005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:45.436014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:16:45.436850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:45.440243 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:45.442005 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:16:45.446573 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:45.448358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:45.448519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:45.450109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:45.450270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:45.452227 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:45.454716 augenrules[1482]: No rules Jan 13 20:16:45.454960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:45.457747 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:45.459288 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:45.459513 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:45.471040 systemd-resolved[1438]: Positive Trust Anchors: Jan 13 20:16:45.471275 systemd-resolved[1438]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:45.471308 systemd-resolved[1438]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:45.484750 systemd-resolved[1438]: Defaulting to hostname 'linux'. Jan 13 20:16:45.490084 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:45.491086 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:45.492500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:45.494494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:45.498278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:45.501241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:45.503000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:45.503136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:45.503818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:45.506369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:45.506528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 13 20:16:45.508009 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:45.508157 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:45.509505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:45.509705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:45.510169 augenrules[1496]: /sbin/augenrules: No change Jan 13 20:16:45.511325 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:45.511566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:45.515470 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:45.516480 augenrules[1523]: No rules Jan 13 20:16:45.518968 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:45.519297 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:45.524146 systemd[1]: Reached target network.target - Network. Jan 13 20:16:45.524853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:45.525737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:45.526065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:45.535007 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:16:45.576756 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:16:45.577588 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:16:45.577646 systemd-timesyncd[1535]: Initial clock synchronization to Mon 2025-01-13 20:16:45.954436 UTC. Jan 13 20:16:45.578102 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:45.578931 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:16:45.579849 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:16:45.580736 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:16:45.581635 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:16:45.581666 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:45.582365 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:16:45.583267 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:16:45.584168 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:16:45.585061 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:45.586545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:16:45.589117 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:16:45.591218 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:16:45.594852 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:16:45.595660 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:45.596400 systemd[1]: Reached target basic.target - Basic System. 
Jan 13 20:16:45.597356 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:16:45.597411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:45.597432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:45.598807 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:16:45.600795 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:16:45.602696 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:16:45.606711 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:16:45.607520 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:16:45.609940 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:16:45.614965 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:16:45.615821 jq[1541]: false Jan 13 20:16:45.619933 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:16:45.625907 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:16:45.632055 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:16:45.639956 dbus-daemon[1540]: [system] SELinux support is enabled Jan 13 20:16:45.645543 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:16:45.647002 extend-filesystems[1543]: Found loop3 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found loop4 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found loop5 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda1 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda2 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda3 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found usr Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda4 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda6 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda7 Jan 13 20:16:45.647788 extend-filesystems[1543]: Found vda9 Jan 13 20:16:45.647788 extend-filesystems[1543]: Checking size of /dev/vda9 Jan 13 20:16:45.648048 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:16:45.651382 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:16:45.653834 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:16:45.658184 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:16:45.658433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:16:45.658699 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:16:45.658922 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:16:45.660859 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:16:45.661073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:16:45.671789 jq[1563]: true Jan 13 20:16:45.676022 extend-filesystems[1543]: Resized partition /dev/vda9 Jan 13 20:16:45.677110 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:16:45.682848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:16:45.682883 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:16:45.685176 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:16:45.685197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:16:45.697784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1230) Jan 13 20:16:45.698886 extend-filesystems[1581]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:16:45.700270 jq[1579]: true Jan 13 20:16:45.705239 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:16:45.708900 tar[1567]: linux-arm64/helm Jan 13 20:16:45.735462 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:16:45.736712 systemd-logind[1553]: New seat seat0. Jan 13 20:16:45.739196 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:16:45.742994 update_engine[1562]: I20250113 20:16:45.742786 1562 main.cc:92] Flatcar Update Engine starting Jan 13 20:16:45.750285 update_engine[1562]: I20250113 20:16:45.748261 1562 update_check_scheduler.cc:74] Next update check in 5m10s Jan 13 20:16:45.749215 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:16:45.751377 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:16:45.764794 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:16:45.762024 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:16:45.771221 extend-filesystems[1581]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:16:45.771221 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:16:45.771221 extend-filesystems[1581]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:16:45.770188 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:16:45.774118 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Jan 13 20:16:45.770488 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:16:45.785043 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:45.786846 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:16:45.789274 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 13 20:16:45.812021 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:16:45.923975 containerd[1570]: time="2025-01-13T20:16:45.923831840Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:16:45.959567 containerd[1570]: time="2025-01-13T20:16:45.959512600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961060 containerd[1570]: time="2025-01-13T20:16:45.961009520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961060 containerd[1570]: time="2025-01-13T20:16:45.961049280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:16:45.961060 containerd[1570]: time="2025-01-13T20:16:45.961082360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:16:45.961273 containerd[1570]: time="2025-01-13T20:16:45.961239800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:16:45.961273 containerd[1570]: time="2025-01-13T20:16:45.961264560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961341 containerd[1570]: time="2025-01-13T20:16:45.961321160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961341 containerd[1570]: time="2025-01-13T20:16:45.961337600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961574 containerd[1570]: time="2025-01-13T20:16:45.961541440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961574 containerd[1570]: time="2025-01-13T20:16:45.961563640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961643 containerd[1570]: time="2025-01-13T20:16:45.961576960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961643 containerd[1570]: time="2025-01-13T20:16:45.961586640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961680 containerd[1570]: time="2025-01-13T20:16:45.961665760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.961901 containerd[1570]: time="2025-01-13T20:16:45.961879920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:45.962058 containerd[1570]: time="2025-01-13T20:16:45.962018760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:45.962058 containerd[1570]: time="2025-01-13T20:16:45.962038920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:16:45.962129 containerd[1570]: time="2025-01-13T20:16:45.962114360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:16:45.962173 containerd[1570]: time="2025-01-13T20:16:45.962160680Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:16:45.966188 containerd[1570]: time="2025-01-13T20:16:45.966157680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:16:45.966265 containerd[1570]: time="2025-01-13T20:16:45.966219200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:16:45.966265 containerd[1570]: time="2025-01-13T20:16:45.966235360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:16:45.966265 containerd[1570]: time="2025-01-13T20:16:45.966250400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:16:45.966325 containerd[1570]: time="2025-01-13T20:16:45.966267640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:16:45.966425 containerd[1570]: time="2025-01-13T20:16:45.966403920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:16:45.966751 containerd[1570]: time="2025-01-13T20:16:45.966731480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:16:45.966875 containerd[1570]: time="2025-01-13T20:16:45.966854160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:16:45.966904 containerd[1570]: time="2025-01-13T20:16:45.966877600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:16:45.966904 containerd[1570]: time="2025-01-13T20:16:45.966891840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:16:45.966938 containerd[1570]: time="2025-01-13T20:16:45.966905360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.966938 containerd[1570]: time="2025-01-13T20:16:45.966920080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.966938 containerd[1570]: time="2025-01-13T20:16:45.966932920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.967007 containerd[1570]: time="2025-01-13T20:16:45.966946560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.967007 containerd[1570]: time="2025-01-13T20:16:45.966961160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 13 20:16:45.967007 containerd[1570]: time="2025-01-13T20:16:45.966974280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.967007 containerd[1570]: time="2025-01-13T20:16:45.966986680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.967007 containerd[1570]: time="2025-01-13T20:16:45.966999120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:16:45.967085 containerd[1570]: time="2025-01-13T20:16:45.967020800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967085 containerd[1570]: time="2025-01-13T20:16:45.967051600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967085 containerd[1570]: time="2025-01-13T20:16:45.967064560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967085 containerd[1570]: time="2025-01-13T20:16:45.967078600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967155 containerd[1570]: time="2025-01-13T20:16:45.967091720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967155 containerd[1570]: time="2025-01-13T20:16:45.967105680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967155 containerd[1570]: time="2025-01-13T20:16:45.967120280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967155 containerd[1570]: time="2025-01-13T20:16:45.967132600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967155 containerd[1570]: time="2025-01-13T20:16:45.967145280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967158520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967170640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967182360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967195360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967210520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:16:45.967237 containerd[1570]: time="2025-01-13T20:16:45.967232360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967341 containerd[1570]: time="2025-01-13T20:16:45.967246080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:45.967341 containerd[1570]: time="2025-01-13T20:16:45.967257560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:16:45.967481 containerd[1570]: time="2025-01-13T20:16:45.967428880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:16:45.967481 containerd[1570]: time="2025-01-13T20:16:45.967453680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:16:45.967481 containerd[1570]: time="2025-01-13T20:16:45.967465200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:16:45.967481 containerd[1570]: time="2025-01-13T20:16:45.967476800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:16:45.967568 containerd[1570]: time="2025-01-13T20:16:45.967485720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967568 containerd[1570]: time="2025-01-13T20:16:45.967497800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:16:45.967568 containerd[1570]: time="2025-01-13T20:16:45.967507200Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:16:45.967568 containerd[1570]: time="2025-01-13T20:16:45.967517440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:16:45.967918 containerd[1570]: time="2025-01-13T20:16:45.967864640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:16:45.968035 containerd[1570]: time="2025-01-13T20:16:45.967922880Z" level=info msg="Connect containerd service" Jan 13 20:16:45.968035 containerd[1570]: time="2025-01-13T20:16:45.967958240Z" level=info msg="using legacy CRI server" Jan 13 20:16:45.968035 containerd[1570]: time="2025-01-13T20:16:45.967965760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:16:45.968212 containerd[1570]: time="2025-01-13T20:16:45.968193280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:16:45.968933 containerd[1570]: time="2025-01-13T20:16:45.968901280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:16:45.969116 containerd[1570]: time="2025-01-13T20:16:45.969085160Z" level=info msg="Start subscribing containerd event" Jan 13 20:16:45.969157 containerd[1570]: time="2025-01-13T20:16:45.969134400Z" level=info msg="Start recovering state" Jan 13 20:16:45.969329 containerd[1570]: time="2025-01-13T20:16:45.969196720Z" level=info msg="Start event monitor" Jan 13 20:16:45.969329 containerd[1570]: time="2025-01-13T20:16:45.969214880Z" level=info msg="Start snapshots syncer" Jan 13 20:16:45.969329 containerd[1570]: time="2025-01-13T20:16:45.969226600Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:16:45.969329 containerd[1570]: time="2025-01-13T20:16:45.969233640Z" level=info msg="Start streaming server" Jan 13 20:16:45.969909 containerd[1570]: time="2025-01-13T20:16:45.969888800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:16:45.970036 containerd[1570]: time="2025-01-13T20:16:45.969939200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:16:45.970109 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:16:45.971366 containerd[1570]: time="2025-01-13T20:16:45.971333800Z" level=info msg="containerd successfully booted in 0.051637s" Jan 13 20:16:46.089958 tar[1567]: linux-arm64/LICENSE Jan 13 20:16:46.089958 tar[1567]: linux-arm64/README.md Jan 13 20:16:46.109775 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:16:46.349622 systemd-networkd[1235]: eth0: Gained IPv6LL Jan 13 20:16:46.352924 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:46.354831 systemd[1]: Reached target network-online.target - Network is Online. 
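[Annotation] The `failed to load cni during init ... no network config found in /etc/cni/net.d` entry above is the expected first-boot state: the CRI plugin starts before any network plugin has written a config. A minimal sketch of what that directory check amounts to, in plain stdlib Go (illustrative only, not containerd's actual loader, which also parses and validates the conflist):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfigs mimics, in spirit, the startup check that produced the
// log entry above: look for *.conf, *.conflist, or *.json files in the
// CNI conf dir (NetworkPluginConfDir:/etc/cni/net.d in the dumped config).
func findCNIConfigs(dir string) ([]string, error) {
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		return nil, fmt.Errorf("no network config found in %s", dir)
	}
	return found, nil
}

func main() {
	confs, err := findCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cni config load failed:", err)
		os.Exit(1)
	}
	fmt.Println("CNI configs:", confs)
}
```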
Jan 13 20:16:46.364503 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:16:46.366068 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:16:46.369036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:46.371312 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:16:46.388955 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:16:46.397272 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:16:46.398777 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:16:46.399058 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:16:46.400920 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:16:46.404386 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:16:46.409211 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:16:46.409488 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:16:46.420085 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:16:46.431308 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:16:46.440108 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:16:46.442491 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:16:46.443694 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:16:46.897268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:16:46.898608 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:16:46.900085 systemd[1]: Startup finished in 5.266s (kernel) + 3.508s (userspace) = 8.775s. Jan 13 20:16:46.901486 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:47.418601 kubelet[1676]: E0113 20:16:47.418519 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:47.421734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:47.421965 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:16:51.444488 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:16:51.461025 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Jan 13 20:16:51.523243 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:51.526741 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:51.534461 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:16:51.546019 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:16:51.548955 systemd-logind[1553]: New session 1 of user core. Jan 13 20:16:51.554931 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
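[Annotation] The `Accepted publickey for core ... SHA256:iH1z/...` entries here and below print the client key's SHA256 fingerprint. A small sketch of computing that same fingerprint form from an authorized_keys entry, using the real golang.org/x/crypto/ssh package (the key path is an illustrative placeholder, not taken from the log):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh" // external module: golang.org/x/crypto
)

func main() {
	// Read a public key in authorized_keys format (placeholder path).
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 yields the same "SHA256:<base64>" string that
	// sshd prints in its "Accepted publickey" lines.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```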
Jan 13 20:16:51.556973 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:16:51.562658 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:16:51.632723 systemd[1696]: Queued start job for default target default.target. Jan 13 20:16:51.633063 systemd[1696]: Created slice app.slice - User Application Slice. Jan 13 20:16:51.633085 systemd[1696]: Reached target paths.target - Paths. Jan 13 20:16:51.633097 systemd[1696]: Reached target timers.target - Timers. Jan 13 20:16:51.643876 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:16:51.649165 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:16:51.649220 systemd[1696]: Reached target sockets.target - Sockets. Jan 13 20:16:51.649231 systemd[1696]: Reached target basic.target - Basic System. Jan 13 20:16:51.649266 systemd[1696]: Reached target default.target - Main User Target. Jan 13 20:16:51.649288 systemd[1696]: Startup finished in 82ms. Jan 13 20:16:51.649590 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:16:51.651074 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:16:51.716146 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Jan 13 20:16:51.757015 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:51.758120 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:51.762035 systemd-logind[1553]: New session 2 of user core. Jan 13 20:16:51.776048 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:16:51.827843 sshd[1711]: Connection closed by 10.0.0.1 port 53982 Jan 13 20:16:51.827517 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:51.844024 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:53994.service - OpenSSH per-connection server daemon (10.0.0.1:53994). Jan 13 20:16:51.844436 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:53982.service: Deactivated successfully. Jan 13 20:16:51.845841 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:16:51.846815 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:16:51.847926 systemd-logind[1553]: Removed session 2. Jan 13 20:16:51.884998 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:51.886028 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:51.889771 systemd-logind[1553]: New session 3 of user core. Jan 13 20:16:51.899067 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:16:51.947429 sshd[1719]: Connection closed by 10.0.0.1 port 53994 Jan 13 20:16:51.949350 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:51.960039 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:54002.service - OpenSSH per-connection server daemon (10.0.0.1:54002). Jan 13 20:16:51.960407 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:53994.service: Deactivated successfully. Jan 13 20:16:51.962217 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:16:51.962756 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:16:51.964058 systemd-logind[1553]: Removed session 3. 
Jan 13 20:16:52.000967 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 54002 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:52.001955 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:52.005244 systemd-logind[1553]: New session 4 of user core. Jan 13 20:16:52.016029 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:16:52.067827 sshd[1727]: Connection closed by 10.0.0.1 port 54002 Jan 13 20:16:52.067747 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:52.076016 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:54018.service - OpenSSH per-connection server daemon (10.0.0.1:54018). Jan 13 20:16:52.076372 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:54002.service: Deactivated successfully. Jan 13 20:16:52.077901 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:16:52.078625 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:16:52.079203 systemd-logind[1553]: Removed session 4. Jan 13 20:16:52.116722 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 54018 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:52.117843 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:52.121465 systemd-logind[1553]: New session 5 of user core. Jan 13 20:16:52.137122 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:16:52.207360 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:16:52.207649 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:52.219707 sudo[1736]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:52.222162 sshd[1735]: Connection closed by 10.0.0.1 port 54018 Jan 13 20:16:52.221411 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:52.238058 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:54026.service - OpenSSH per-connection server daemon (10.0.0.1:54026). Jan 13 20:16:52.238471 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:54018.service: Deactivated successfully. Jan 13 20:16:52.239976 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:16:52.240559 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:16:52.241711 systemd-logind[1553]: Removed session 5. Jan 13 20:16:52.279304 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 54026 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:52.280337 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:52.284104 systemd-logind[1553]: New session 6 of user core. Jan 13 20:16:52.297015 systemd[1]: Started session-6.scope - Session 6 of User core. 
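[Annotation] The sudo entries above follow a fixed `user : KEY=value ; KEY=value ; COMMAND=...` shape. A sketch of splitting that format, for log-processing purposes only (real sudo lines can carry more fields, e.g. TTY=, which this deliberately ignores):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSudoLog splits a sudo journal line like the one logged above:
// "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
// into the invoking user and a map of its KEY=value fields.
func parseSudoLog(line string) (user string, fields map[string]string) {
	user, rest, _ := strings.Cut(line, " : ")
	fields = make(map[string]string)
	for _, part := range strings.Split(rest, " ; ") {
		if k, v, ok := strings.Cut(part, "="); ok {
			fields[k] = v
		}
	}
	return user, fields
}

func main() {
	u, f := parseSudoLog("core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1")
	fmt.Println(u, f["USER"], f["COMMAND"])
}
```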
Jan 13 20:16:52.348823 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:16:52.349097 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:52.351946 sudo[1746]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:52.356137 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:16:52.356401 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:52.378051 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:52.399904 augenrules[1768]: No rules Jan 13 20:16:52.400482 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:52.400705 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:52.401542 sudo[1745]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:52.402615 sshd[1744]: Connection closed by 10.0.0.1 port 54026 Jan 13 20:16:52.403041 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:52.407139 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:56318.service - OpenSSH per-connection server daemon (10.0.0.1:56318). Jan 13 20:16:52.407498 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:54026.service: Deactivated successfully. Jan 13 20:16:52.409776 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:16:52.409955 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:16:52.411104 systemd-logind[1553]: Removed session 6. Jan 13 20:16:52.449387 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 56318 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:52.450417 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:52.453920 systemd-logind[1553]: New session 7 of user core. Jan 13 20:16:52.460127 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:16:52.510398 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:16:52.510667 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:52.832990 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:16:52.833177 (dockerd)[1802]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:16:53.080863 dockerd[1802]: time="2025-01-13T20:16:53.080806790Z" level=info msg="Starting up" Jan 13 20:16:53.330961 dockerd[1802]: time="2025-01-13T20:16:53.330856192Z" level=info msg="Loading containers: start." Jan 13 20:16:53.468809 kernel: Initializing XFRM netlink socket Jan 13 20:16:53.553604 systemd-networkd[1235]: docker0: Link UP Jan 13 20:16:53.581837 dockerd[1802]: time="2025-01-13T20:16:53.581730663Z" level=info msg="Loading containers: done." Jan 13 20:16:53.597195 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1580896787-merged.mount: Deactivated successfully. 
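[Annotation] The mount unit name above, `var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1580896787-merged.mount`, shows systemd's path-to-unit-name escaping: the leading `/` is dropped, remaining `/` become `-`, and bytes outside a small safe set are hex-escaped, so a literal `-` in the path becomes `\x2d`. A simplified sketch of that transformation (an approximation of `systemd-escape --path`, not systemd's real code; edge cases like a leading dot are ignored):

```go
package main

import "fmt"

// escapePath applies the unit-name escaping visible in the mount unit
// above: strip the leading '/', turn '/' into '-', and hex-escape bytes
// outside [a-zA-Z0-9:_.] as \xNN.
func escapePath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(escapePath("/var/lib/docker/overlay2/opaque-bug-check1580896787/merged") + ".mount")
	// prints: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1580896787-merged.mount
}
```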
Jan 13 20:16:53.599600 dockerd[1802]: time="2025-01-13T20:16:53.599558085Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:16:53.599674 dockerd[1802]: time="2025-01-13T20:16:53.599643028Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:16:53.599761 dockerd[1802]: time="2025-01-13T20:16:53.599736078Z" level=info msg="Daemon has completed initialization" Jan 13 20:16:53.629139 dockerd[1802]: time="2025-01-13T20:16:53.629072947Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:16:53.629240 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:16:54.339963 containerd[1570]: time="2025-01-13T20:16:54.339843914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:16:55.015992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775732570.mount: Deactivated successfully. Jan 13 20:16:56.135072 containerd[1570]: time="2025-01-13T20:16:56.135014108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.135416 containerd[1570]: time="2025-01-13T20:16:56.135371888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 20:16:56.136410 containerd[1570]: time="2025-01-13T20:16:56.136371325Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.140125 containerd[1570]: time="2025-01-13T20:16:56.139929206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.142609 containerd[1570]: time="2025-01-13T20:16:56.142572813Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.802688047s" Jan 13 20:16:56.142663 containerd[1570]: time="2025-01-13T20:16:56.142611041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:16:56.161303 containerd[1570]: time="2025-01-13T20:16:56.161066967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:16:57.672538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:16:57.685026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:57.780211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
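[Annotation] The kubelet started at the end of the entries above will exit again for the same reason as PID 1676 earlier: `/var/lib/kubelet/config.yaml` does not exist yet (that file is normally written by `kubeadm init`/`kubeadm join`, which has not run at this point). A minimal sketch of the failure path, reproducing the wrapped error chain seen in the log with stdlib Go only (the real kubelet additionally decodes the YAML into a KubeletConfiguration):

```go
package main

import (
	"fmt"
	"os"
)

// loadKubeletConfig sketches the error path in the kubelet entries: the
// config file is read from disk, and a missing file surfaces as the
// wrapped "no such file or directory" error.
func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read kubelet config file %q, error: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "command failed:", err)
		os.Exit(1) // matches status=1/FAILURE in the unit result above
	}
}
```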
Jan 13 20:16:57.784363 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:57.849984 kubelet[2083]: E0113 20:16:57.849920 2083 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:57.853706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:57.853965 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:16:57.860822 containerd[1570]: time="2025-01-13T20:16:57.860771244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.861689 containerd[1570]: time="2025-01-13T20:16:57.861442769Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 20:16:57.864031 containerd[1570]: time="2025-01-13T20:16:57.862913429Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.866266 containerd[1570]: time="2025-01-13T20:16:57.866222938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.867471 containerd[1570]: time="2025-01-13T20:16:57.867377445Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.706271533s" Jan 13 20:16:57.867471 containerd[1570]: time="2025-01-13T20:16:57.867408620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:16:57.887998 containerd[1570]: time="2025-01-13T20:16:57.887952082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:16:58.903599 containerd[1570]: time="2025-01-13T20:16:58.903549881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:58.904685 containerd[1570]: time="2025-01-13T20:16:58.904443643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 20:16:58.905528 containerd[1570]: time="2025-01-13T20:16:58.905479178Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:58.909045 containerd[1570]: time="2025-01-13T20:16:58.909000603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 20:16:58.909755 containerd[1570]: time="2025-01-13T20:16:58.909724367Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.021728707s" Jan 13 20:16:58.909755 containerd[1570]: time="2025-01-13T20:16:58.909753723Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:16:58.928671 containerd[1570]: time="2025-01-13T20:16:58.928576878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:16:59.919255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694931998.mount: Deactivated successfully. Jan 13 20:17:00.242571 containerd[1570]: time="2025-01-13T20:17:00.242118858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:00.243089 containerd[1570]: time="2025-01-13T20:17:00.242703958Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 20:17:00.243945 containerd[1570]: time="2025-01-13T20:17:00.243865376Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:00.245863 containerd[1570]: time="2025-01-13T20:17:00.245820922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:00.247085 containerd[1570]: time="2025-01-13T20:17:00.247051721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.318438113s" Jan 13 20:17:00.247154 containerd[1570]: time="2025-01-13T20:17:00.247089755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:17:00.267033 containerd[1570]: time="2025-01-13T20:17:00.266994337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:17:00.772159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3172907782.mount: Deactivated successfully. 
Jan 13 20:17:01.405608 containerd[1570]: time="2025-01-13T20:17:01.405548304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.406799 containerd[1570]: time="2025-01-13T20:17:01.406373437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:17:01.407618 containerd[1570]: time="2025-01-13T20:17:01.407564924Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.410597 containerd[1570]: time="2025-01-13T20:17:01.410538228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.411899 containerd[1570]: time="2025-01-13T20:17:01.411842065Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.144805794s" Jan 13 20:17:01.411899 containerd[1570]: time="2025-01-13T20:17:01.411879703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:17:01.430625 containerd[1570]: time="2025-01-13T20:17:01.430591032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:17:01.847186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641069469.mount: Deactivated successfully. 
Jan 13 20:17:01.852468 containerd[1570]: time="2025-01-13T20:17:01.852404077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.852843 containerd[1570]: time="2025-01-13T20:17:01.852798529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 20:17:01.853676 containerd[1570]: time="2025-01-13T20:17:01.853635657Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.855903 containerd[1570]: time="2025-01-13T20:17:01.855871824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.856935 containerd[1570]: time="2025-01-13T20:17:01.856790669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 426.145858ms" Jan 13 20:17:01.856935 containerd[1570]: time="2025-01-13T20:17:01.856833459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:17:01.875134 containerd[1570]: time="2025-01-13T20:17:01.875098891Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:17:02.441356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392429929.mount: Deactivated successfully. Jan 13 20:17:04.029343 containerd[1570]: time="2025-01-13T20:17:04.029289729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:04.030424 containerd[1570]: time="2025-01-13T20:17:04.030389794Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 20:17:04.030762 containerd[1570]: time="2025-01-13T20:17:04.030739156Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:04.034074 containerd[1570]: time="2025-01-13T20:17:04.034012479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:04.035427 containerd[1570]: time="2025-01-13T20:17:04.035379878Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.160244532s" Jan 13 20:17:04.035427 containerd[1570]: time="2025-01-13T20:17:04.035411572Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:17:08.104318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 13 20:17:08.113977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:08.284427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:08.288293 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:08.326930 kubelet[2316]: E0113 20:17:08.326831 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:08.329694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:08.329902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:10.027105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:10.038975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:10.053448 systemd[1]: Reloading requested from client PID 2336 ('systemctl') (unit session-7.scope)... Jan 13 20:17:10.053465 systemd[1]: Reloading... Jan 13 20:17:10.112945 zram_generator::config[2376]: No configuration found. Jan 13 20:17:10.221543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:17:10.269251 systemd[1]: Reloading finished in 215 ms. Jan 13 20:17:10.308197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:10.311164 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:17:10.311399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:10.326042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:10.411459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:10.415251 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:17:10.453271 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:17:10.453271 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:17:10.453271 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
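[Annotation] The deprecation warnings above say flags like `--container-runtime-endpoint` should move into the file passed via `--config`, which is exactly the `/var/lib/kubelet/config.yaml` this node is missing. A sketch of bootstrapping a stub of that file: the `kind`/`apiVersion` header is the standard v1beta1 one; the `containerRuntimeEndpoint` field name is an assumption on my part (it maps to the deprecated flag in recent kubelets), and the socket path is the `ContainerdEndpoint` logged earlier. In practice kubeadm writes this file; this is illustrative only.

```go
package main

import (
	"log"
	"os"
)

// Minimal KubeletConfiguration stub; everything beyond the header is
// normally filled in by cluster tooling (kubeadm).
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumed field name for the deprecated --container-runtime-endpoint flag:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```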
Jan 13 20:17:10.453627 kubelet[2435]: I0113 20:17:10.453323 2435 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:17:11.415288 kubelet[2435]: I0113 20:17:11.415253 2435 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:17:11.416616 kubelet[2435]: I0113 20:17:11.415435 2435 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:17:11.416616 kubelet[2435]: I0113 20:17:11.415658 2435 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:17:11.440521 kubelet[2435]: I0113 20:17:11.440496 2435 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:17:11.441885 kubelet[2435]: E0113 20:17:11.441867 2435 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.448969 kubelet[2435]: I0113 20:17:11.448938 2435 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:17:11.449315 kubelet[2435]: I0113 20:17:11.449289 2435 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:17:11.449492 kubelet[2435]: I0113 20:17:11.449470 2435 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:17:11.449574 kubelet[2435]: I0113 20:17:11.449494 2435 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:17:11.449574 kubelet[2435]: I0113 20:17:11.449504 2435 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:17:11.450590 kubelet[2435]: I0113 20:17:11.450556 2435 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:11.454459 kubelet[2435]: I0113 20:17:11.454434 2435 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:17:11.454735 kubelet[2435]: 
I0113 20:17:11.454464 2435 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:17:11.454735 kubelet[2435]: I0113 20:17:11.454484 2435 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:17:11.454735 kubelet[2435]: I0113 20:17:11.454495 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:17:11.457152 kubelet[2435]: W0113 20:17:11.456873 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.457152 kubelet[2435]: E0113 20:17:11.456939 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.457258 kubelet[2435]: W0113 20:17:11.457152 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.457258 kubelet[2435]: E0113 20:17:11.457192 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.459150 kubelet[2435]: I0113 20:17:11.459114 2435 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:17:11.459786 kubelet[2435]: I0113 20:17:11.459756 2435 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:17:11.460357 kubelet[2435]: W0113 20:17:11.460327 2435 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
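[Annotation] The reflector failures above (`dial tcp 10.0.0.83:6443: connect: connection refused`) are the normal bootstrap race: the kubelet is up before the kube-apiserver it will itself launch as a static pod, so every list/watch fails until that pod is running. A minimal sketch of probing for the apiserver port the same way, with the address taken from the log and the timings chosen for illustration:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver's TCP port until it accepts
// connections; before the static pod is up, each attempt fails with the
// same "connect: connection refused" seen in the reflector entries.
func waitForAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Println("still waiting:", err)
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver not reachable after %s: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForAPIServer("10.0.0.83:6443", 2*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```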
Jan 13 20:17:11.461183 kubelet[2435]: I0113 20:17:11.461161 2435 server.go:1256] "Started kubelet" Jan 13 20:17:11.461360 kubelet[2435]: I0113 20:17:11.461328 2435 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:17:11.461571 kubelet[2435]: I0113 20:17:11.461553 2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:17:11.461886 kubelet[2435]: I0113 20:17:11.461867 2435 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:17:11.463236 kubelet[2435]: I0113 20:17:11.463203 2435 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:17:11.464127 kubelet[2435]: I0113 20:17:11.463503 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:17:11.464384 kubelet[2435]: E0113 20:17:11.464369 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:11.464443 kubelet[2435]: I0113 20:17:11.464399 2435 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:17:11.464630 kubelet[2435]: I0113 20:17:11.464513 2435 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:17:11.464630 kubelet[2435]: I0113 20:17:11.464587 2435 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:17:11.464935 kubelet[2435]: W0113 20:17:11.464892 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.464993 kubelet[2435]: E0113 20:17:11.464941 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.467133 kubelet[2435]: E0113 20:17:11.465497 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" Jan 13 20:17:11.469431 kubelet[2435]: I0113 20:17:11.469400 2435 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:17:11.469549 kubelet[2435]: I0113 20:17:11.469489 2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:17:11.470416 kubelet[2435]: E0113 20:17:11.469872 2435 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:17:11.470739 kubelet[2435]: I0113 20:17:11.470699 2435 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:17:11.472868 kubelet[2435]: E0113 20:17:11.472836 2435 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a59e186576153 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:17:11.461134675 +0000 UTC m=+1.042375236,LastTimestamp:2025-01-13 20:17:11.461134675 +0000 UTC m=+1.042375236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:17:11.480567 kubelet[2435]: I0113 20:17:11.480533 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:17:11.481456 kubelet[2435]: I0113 20:17:11.481433 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:17:11.481456 kubelet[2435]: I0113 20:17:11.481454 2435 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:17:11.481516 kubelet[2435]: I0113 20:17:11.481485 2435 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:17:11.481553 kubelet[2435]: E0113 20:17:11.481538 2435 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:17:11.484698 kubelet[2435]: W0113 20:17:11.484657 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.484698 kubelet[2435]: E0113 20:17:11.484699 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:11.488418 kubelet[2435]: I0113 20:17:11.488398 2435 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:17:11.488418 kubelet[2435]: I0113 20:17:11.488417 2435 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:17:11.488503 kubelet[2435]: I0113 20:17:11.488435 2435 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:11.548444 kubelet[2435]: I0113 20:17:11.548403 2435 policy_none.go:49] "None policy: Start" Jan 13 20:17:11.549260 kubelet[2435]: I0113 20:17:11.549229 2435 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:17:11.549349 kubelet[2435]: I0113 20:17:11.549277 2435 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:17:11.554769 kubelet[2435]: I0113 20:17:11.554741 2435 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:17:11.555060 kubelet[2435]: I0113 20:17:11.555044 2435 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jan 13 20:17:11.556509 kubelet[2435]: E0113 20:17:11.556492 2435 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:17:11.565594 kubelet[2435]: I0113 20:17:11.565577 2435 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:11.565989 kubelet[2435]: E0113 20:17:11.565974 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jan 13 20:17:11.582229 kubelet[2435]: I0113 20:17:11.582204 2435 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:17:11.584941 kubelet[2435]: I0113 20:17:11.584891 2435 topology_manager.go:215] "Topology Admit Handler" podUID="4e043ae940ea23cbb8e26e59966bb142" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:17:11.586689 kubelet[2435]: I0113 20:17:11.586615 2435 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:17:11.666086 kubelet[2435]: E0113 20:17:11.665967 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Jan 13 20:17:11.765631 kubelet[2435]: I0113 20:17:11.765536 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:11.765631 kubelet[2435]: I0113 20:17:11.765600 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:11.765631 kubelet[2435]: I0113 20:17:11.765627 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:11.765828 kubelet[2435]: I0113 20:17:11.765672 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:11.765828 kubelet[2435]: I0113 20:17:11.765716 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 
20:17:11.765828 kubelet[2435]: I0113 20:17:11.765742 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:11.765828 kubelet[2435]: I0113 20:17:11.765782 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:11.765828 kubelet[2435]: I0113 20:17:11.765805 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:11.765933 kubelet[2435]: I0113 20:17:11.765827 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:11.767635 kubelet[2435]: I0113 20:17:11.767594 2435 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:11.767946 kubelet[2435]: E0113 20:17:11.767919 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jan 13 20:17:11.889507 kubelet[2435]: E0113 20:17:11.889461 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:11.889694 kubelet[2435]: E0113 20:17:11.889642 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:11.890208 containerd[1570]: time="2025-01-13T20:17:11.890171381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:11.890521 containerd[1570]: time="2025-01-13T20:17:11.890205077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e043ae940ea23cbb8e26e59966bb142,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:11.891637 kubelet[2435]: E0113 20:17:11.891616 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:11.892008 containerd[1570]: time="2025-01-13T20:17:11.891966859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:12.066478 kubelet[2435]: E0113 20:17:12.066371 2435 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Jan 13 20:17:12.168929 kubelet[2435]: I0113 20:17:12.168886 2435 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:12.169264 kubelet[2435]: E0113 20:17:12.169245 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jan 13 20:17:12.353854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929626399.mount: Deactivated successfully. Jan 13 20:17:12.358966 containerd[1570]: time="2025-01-13T20:17:12.358906773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:12.360769 containerd[1570]: time="2025-01-13T20:17:12.360717459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:17:12.361478 containerd[1570]: time="2025-01-13T20:17:12.361429459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:12.362660 containerd[1570]: time="2025-01-13T20:17:12.362621001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:12.362915 containerd[1570]: time="2025-01-13T20:17:12.362860871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:17:12.363598 containerd[1570]: time="2025-01-13T20:17:12.363543549Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:12.364223 containerd[1570]: time="2025-01-13T20:17:12.364111419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:17:12.365989 containerd[1570]: time="2025-01-13T20:17:12.365958958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:12.369150 containerd[1570]: time="2025-01-13T20:17:12.369053080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.787143ms" Jan 13 20:17:12.370449 containerd[1570]: time="2025-01-13T20:17:12.370417674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
480.160672ms" Jan 13 20:17:12.372417 containerd[1570]: time="2025-01-13T20:17:12.372390036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.367326ms" Jan 13 20:17:12.468430 kubelet[2435]: W0113 20:17:12.468319 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.468430 kubelet[2435]: E0113 20:17:12.468382 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.495438 containerd[1570]: time="2025-01-13T20:17:12.495337464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:12.495438 containerd[1570]: time="2025-01-13T20:17:12.495404803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:12.495438 containerd[1570]: time="2025-01-13T20:17:12.495415579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.495644 containerd[1570]: time="2025-01-13T20:17:12.495494454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.497438 containerd[1570]: time="2025-01-13T20:17:12.497016959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:12.497438 containerd[1570]: time="2025-01-13T20:17:12.497070317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:12.497438 containerd[1570]: time="2025-01-13T20:17:12.497085098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.497438 containerd[1570]: time="2025-01-13T20:17:12.497158726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.498998 containerd[1570]: time="2025-01-13T20:17:12.498899229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:12.498998 containerd[1570]: time="2025-01-13T20:17:12.498977624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:12.499176 containerd[1570]: time="2025-01-13T20:17:12.499106092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.501599 containerd[1570]: time="2025-01-13T20:17:12.500238787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:12.548610 containerd[1570]: time="2025-01-13T20:17:12.548264849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"39411dc5947dfeeced5c96e4fa7d8ca1bb9684f47d744de536c4ef9544fcbc08\"" Jan 13 20:17:12.550555 containerd[1570]: time="2025-01-13T20:17:12.550524912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1e6377a221410f95c96ec421a087e6f3ca881e0514f64b1c5ce89e79d916d68\"" Jan 13 20:17:12.550996 containerd[1570]: time="2025-01-13T20:17:12.550971164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e043ae940ea23cbb8e26e59966bb142,Namespace:kube-system,Attempt:0,} returns sandbox id \"a020e7897fba91baa63e0ffc8a22cd77b02f1ec918fd64a049048476a62b689d\"" Jan 13 20:17:12.551833 kubelet[2435]: E0113 20:17:12.551596 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:12.552248 kubelet[2435]: E0113 20:17:12.552121 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:12.552248 kubelet[2435]: E0113 20:17:12.552146 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:12.554571 containerd[1570]: time="2025-01-13T20:17:12.554532849Z" level=info msg="CreateContainer within sandbox \"39411dc5947dfeeced5c96e4fa7d8ca1bb9684f47d744de536c4ef9544fcbc08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:17:12.554652 containerd[1570]: time="2025-01-13T20:17:12.554537976Z" level=info msg="CreateContainer within sandbox \"a020e7897fba91baa63e0ffc8a22cd77b02f1ec918fd64a049048476a62b689d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:17:12.554946 containerd[1570]: time="2025-01-13T20:17:12.554919294Z" level=info msg="CreateContainer within sandbox \"e1e6377a221410f95c96ec421a087e6f3ca881e0514f64b1c5ce89e79d916d68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:17:12.567433 containerd[1570]: time="2025-01-13T20:17:12.567383949Z" level=info msg="CreateContainer within sandbox \"39411dc5947dfeeced5c96e4fa7d8ca1bb9684f47d744de536c4ef9544fcbc08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"395765a55352cbc8a9068ba146e23fa218433bef61ab40f2101dd20c036e1738\"" Jan 13 20:17:12.568107 containerd[1570]: time="2025-01-13T20:17:12.568058374Z" level=info msg="StartContainer for \"395765a55352cbc8a9068ba146e23fa218433bef61ab40f2101dd20c036e1738\"" Jan 13 20:17:12.575675 containerd[1570]: time="2025-01-13T20:17:12.575591463Z" level=info msg="CreateContainer within sandbox \"a020e7897fba91baa63e0ffc8a22cd77b02f1ec918fd64a049048476a62b689d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f115a115a7cfcaf1d2f4fc1e0d5957bb78c08b1a3fb03509d903bf991232a50\"" Jan 13 20:17:12.576817 containerd[1570]: time="2025-01-13T20:17:12.576759249Z" level=info msg="StartContainer for 
\"0f115a115a7cfcaf1d2f4fc1e0d5957bb78c08b1a3fb03509d903bf991232a50\"" Jan 13 20:17:12.579736 containerd[1570]: time="2025-01-13T20:17:12.579700187Z" level=info msg="CreateContainer within sandbox \"e1e6377a221410f95c96ec421a087e6f3ca881e0514f64b1c5ce89e79d916d68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"310cefd68b611257f2f90ebdbed1ad5d21ad6e376217ff99b0dec669753f1bcd\"" Jan 13 20:17:12.580373 containerd[1570]: time="2025-01-13T20:17:12.580334995Z" level=info msg="StartContainer for \"310cefd68b611257f2f90ebdbed1ad5d21ad6e376217ff99b0dec669753f1bcd\"" Jan 13 20:17:12.667938 containerd[1570]: time="2025-01-13T20:17:12.661296026Z" level=info msg="StartContainer for \"0f115a115a7cfcaf1d2f4fc1e0d5957bb78c08b1a3fb03509d903bf991232a50\" returns successfully" Jan 13 20:17:12.667938 containerd[1570]: time="2025-01-13T20:17:12.661374501Z" level=info msg="StartContainer for \"395765a55352cbc8a9068ba146e23fa218433bef61ab40f2101dd20c036e1738\" returns successfully" Jan 13 20:17:12.667938 containerd[1570]: time="2025-01-13T20:17:12.661438595Z" level=info msg="StartContainer for \"310cefd68b611257f2f90ebdbed1ad5d21ad6e376217ff99b0dec669753f1bcd\" returns successfully" Jan 13 20:17:12.668103 kubelet[2435]: W0113 20:17:12.664548 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.668103 kubelet[2435]: E0113 20:17:12.664608 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.776717 kubelet[2435]: W0113 20:17:12.776627 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.776717 kubelet[2435]: E0113 20:17:12.776690 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jan 13 20:17:12.972654 kubelet[2435]: I0113 20:17:12.972551 2435 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:13.496462 kubelet[2435]: E0113 20:17:13.496427 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:13.502465 kubelet[2435]: E0113 20:17:13.502430 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:13.504501 kubelet[2435]: E0113 20:17:13.504473 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.042163 kubelet[2435]: E0113 20:17:14.042127 2435 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"localhost\" not found" node="localhost" Jan 13 20:17:14.132481 kubelet[2435]: I0113 20:17:14.132438 2435 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:17:14.146505 kubelet[2435]: E0113 20:17:14.146210 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.247365 kubelet[2435]: E0113 20:17:14.247320 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.347935 kubelet[2435]: E0113 20:17:14.347822 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.448496 kubelet[2435]: E0113 20:17:14.448457 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.503663 kubelet[2435]: E0113 20:17:14.503605 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.549158 kubelet[2435]: E0113 20:17:14.549126 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.649931 kubelet[2435]: E0113 20:17:14.649821 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.750395 kubelet[2435]: E0113 20:17:14.750361 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:14.850944 kubelet[2435]: E0113 20:17:14.850911 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:15.458199 kubelet[2435]: I0113 20:17:15.458084 2435 apiserver.go:52] "Watching apiserver" Jan 13 20:17:15.464911 kubelet[2435]: I0113 20:17:15.464885 2435 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:17:15.511418 kubelet[2435]: E0113 20:17:15.511372 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:16.505304 kubelet[2435]: E0113 20:17:16.505267 2435 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:16.530309 systemd[1]: Reloading requested from client PID 2713 ('systemctl') (unit session-7.scope)... Jan 13 20:17:16.530325 systemd[1]: Reloading... Jan 13 20:17:16.585816 zram_generator::config[2754]: No configuration found. Jan 13 20:17:16.749555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:17:16.804481 systemd[1]: Reloading finished in 273 ms. Jan 13 20:17:16.831295 kubelet[2435]: I0113 20:17:16.831261 2435 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:17:16.831400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:16.838469 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:17:16.838837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:17:16.847223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:16.929098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:16.934185 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:17:16.977425 kubelet[2804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:17:16.977425 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:17:16.977425 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:17:16.977425 kubelet[2804]: I0113 20:17:16.977286 2804 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:17:16.981073 kubelet[2804]: I0113 20:17:16.981034 2804 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:17:16.981073 kubelet[2804]: I0113 20:17:16.981061 2804 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:17:16.981264 kubelet[2804]: I0113 20:17:16.981251 2804 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:17:16.982903 kubelet[2804]: I0113 20:17:16.982885 2804 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:17:16.985768 kubelet[2804]: I0113 20:17:16.985670 2804 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:17:16.993905 kubelet[2804]: I0113 20:17:16.993883 2804 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:17:16.994175 sudo[2819]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:17:16.994447 sudo[2819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994260 2804 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994422 2804 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994440 2804 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994449 2804 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994476 2804 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:16.994589 kubelet[2804]: I0113 20:17:16.994565 2804 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:17:16.995294 kubelet[2804]: I0113 20:17:16.994579 2804 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:17:16.995294 kubelet[2804]: I0113 20:17:16.994601 2804 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:17:16.995294 kubelet[2804]: I0113 20:17:16.994615 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:17:16.995575 kubelet[2804]: I0113 20:17:16.995514 2804 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:17:16.996273 kubelet[2804]: I0113 20:17:16.996254 2804 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:17:16.997471 kubelet[2804]: I0113 20:17:16.996603 2804 server.go:1256] "Started kubelet" Jan 13 20:17:16.997471 kubelet[2804]: I0113 20:17:16.997231 2804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:17:16.997471 kubelet[2804]: I0113 20:17:16.997426 2804 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:17:16.997962 kubelet[2804]: I0113 20:17:16.997942 2804 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:17:16.999067 kubelet[2804]: I0113 20:17:16.999048 2804 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:17:16.999992 kubelet[2804]: I0113 20:17:16.999927 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:17:17.000180 kubelet[2804]: E0113 20:17:17.000161 2804 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:17:17.001357 kubelet[2804]: E0113 20:17:17.001330 2804 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:17.001417 kubelet[2804]: I0113 20:17:17.001368 2804 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:17:17.003771 kubelet[2804]: I0113 20:17:17.001459 2804 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:17:17.003771 kubelet[2804]: I0113 20:17:17.001581 2804 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:17:17.019203 kubelet[2804]: I0113 20:17:17.019172 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:17:17.021855 kubelet[2804]: I0113 20:17:17.021827 2804 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:17:17.021967 kubelet[2804]: I0113 20:17:17.021938 2804 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:17:17.024187 kubelet[2804]: I0113 20:17:17.024164 2804 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:17:17.027789 kubelet[2804]: I0113 20:17:17.026241 2804 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:17:17.027789 kubelet[2804]: I0113 20:17:17.026266 2804 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:17:17.027789 kubelet[2804]: I0113 20:17:17.026281 2804 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:17:17.027789 kubelet[2804]: E0113 20:17:17.026335 2804 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:17:17.074895 kubelet[2804]: I0113 20:17:17.074797 2804 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:17:17.074895 kubelet[2804]: I0113 20:17:17.074824 2804 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:17:17.074895 kubelet[2804]: I0113 20:17:17.074842 2804 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:17.075027 kubelet[2804]: I0113 20:17:17.074977 2804 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:17:17.075027 kubelet[2804]: I0113 20:17:17.074996 2804 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:17:17.075027 kubelet[2804]: I0113 20:17:17.075002 2804 policy_none.go:49] "None policy: Start" Jan 13 20:17:17.075541 kubelet[2804]: I0113 20:17:17.075519 2804 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:17:17.075577 kubelet[2804]: I0113 20:17:17.075546 2804 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:17:17.075840 kubelet[2804]: I0113 20:17:17.075818 2804 state_mem.go:75] "Updated machine memory state" Jan 13 20:17:17.076907 kubelet[2804]: I0113 20:17:17.076889 2804 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:17:17.077113 kubelet[2804]: I0113 20:17:17.077097 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:17:17.105598 kubelet[2804]: I0113 20:17:17.105569 2804 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:17.112930 kubelet[2804]: I0113 20:17:17.112900 2804 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:17:17.113027 kubelet[2804]: I0113 20:17:17.113001 2804 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:17:17.127256 kubelet[2804]: I0113 20:17:17.127177 2804 topology_manager.go:215] "Topology Admit Handler" podUID="4e043ae940ea23cbb8e26e59966bb142" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:17:17.127256 kubelet[2804]: I0113 20:17:17.127258 2804 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:17:17.127380 kubelet[2804]: I0113 20:17:17.127323 2804 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:17:17.133997 kubelet[2804]: E0113 20:17:17.133922 2804 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:17.202470 kubelet[2804]: I0113 20:17:17.202439 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") 
" pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:17.202729 kubelet[2804]: I0113 20:17:17.202677 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:17.202729 kubelet[2804]: I0113 20:17:17.202709 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:17.202851 kubelet[2804]: I0113 20:17:17.202790 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:17.202851 kubelet[2804]: I0113 20:17:17.202826 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e043ae940ea23cbb8e26e59966bb142-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e043ae940ea23cbb8e26e59966bb142\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:17.202953 kubelet[2804]: I0113 20:17:17.202872 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:17.202953 kubelet[2804]: I0113 20:17:17.202936 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:17.203094 kubelet[2804]: I0113 20:17:17.202975 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:17.203094 kubelet[2804]: I0113 20:17:17.203012 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:17:17.433320 kubelet[2804]: E0113 20:17:17.433015 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:17.433840 kubelet[2804]: E0113 20:17:17.433818 2804 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:17.435790 kubelet[2804]: E0113 20:17:17.435756 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:17.439778 sudo[2819]: pam_unix(sudo:session): session closed for user root Jan 13 20:17:17.995435 kubelet[2804]: I0113 20:17:17.995356 2804 apiserver.go:52] "Watching apiserver" Jan 13 20:17:18.001963 kubelet[2804]: I0113 20:17:18.001929 2804 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:17:18.048522 kubelet[2804]: E0113 20:17:18.047362 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:18.048522 kubelet[2804]: E0113 20:17:18.048338 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:18.049298 kubelet[2804]: E0113 20:17:18.049253 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:18.067669 kubelet[2804]: I0113 20:17:18.067638 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.067564203 podStartE2EDuration="1.067564203s" podCreationTimestamp="2025-01-13 20:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:18.063253976 +0000 UTC m=+1.125958230" watchObservedRunningTime="2025-01-13 20:17:18.067564203 +0000 UTC m=+1.130268457" Jan 13 20:17:18.076572 kubelet[2804]: I0113 20:17:18.076512 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.076373967 podStartE2EDuration="3.076373967s" podCreationTimestamp="2025-01-13 20:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:18.069915473 +0000 UTC m=+1.132619727" watchObservedRunningTime="2025-01-13 20:17:18.076373967 +0000 UTC m=+1.139078222" Jan 13 20:17:18.084401 kubelet[2804]: I0113 20:17:18.084371 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.084334883 podStartE2EDuration="1.084334883s" podCreationTimestamp="2025-01-13 20:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:18.076598472 +0000 UTC m=+1.139302686" watchObservedRunningTime="2025-01-13 20:17:18.084334883 +0000 UTC m=+1.147039137" Jan 13 20:17:18.785247 sudo[1781]: pam_unix(sudo:session): session closed for user root Jan 13 20:17:18.787353 sshd[1780]: Connection closed by 10.0.0.1 port 56318 Jan 13 20:17:18.788189 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:18.792161 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:56318.service: Deactivated successfully. 
Jan 13 20:17:18.793928 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:17:18.794285 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:17:18.796310 systemd-logind[1553]: Removed session 7. Jan 13 20:17:19.049473 kubelet[2804]: E0113 20:17:19.049363 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:20.364442 kubelet[2804]: E0113 20:17:20.364364 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:24.336513 kubelet[2804]: E0113 20:17:24.336476 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:25.062499 kubelet[2804]: E0113 20:17:25.062454 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:26.059560 kubelet[2804]: E0113 20:17:26.059530 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:26.349426 kubelet[2804]: E0113 20:17:26.349256 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:27.062111 kubelet[2804]: E0113 20:17:27.061999 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:30.371215 kubelet[2804]: E0113 20:17:30.371180 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:31.137452 update_engine[1562]: I20250113 20:17:31.137383 1562 update_attempter.cc:509] Updating boot flags... Jan 13 20:17:31.164274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2889) Jan 13 20:17:31.188845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2890) Jan 13 20:17:31.207862 kubelet[2804]: I0113 20:17:31.207793 2804 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:17:31.208545 kubelet[2804]: I0113 20:17:31.208388 2804 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:17:31.208575 containerd[1570]: time="2025-01-13T20:17:31.208197991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:17:31.218263 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2890) Jan 13 20:17:32.096915 kubelet[2804]: I0113 20:17:32.096852 2804 topology_manager.go:215] "Topology Admit Handler" podUID="ec31e841-343c-4416-bf2e-9de8c105ba1d" podNamespace="kube-system" podName="kube-proxy-gj4br" Jan 13 20:17:32.098815 kubelet[2804]: I0113 20:17:32.098757 2804 topology_manager.go:215] "Topology Admit Handler" podUID="4b295823-9475-4320-ae5f-8b076682770b" podNamespace="kube-system" podName="cilium-n6nz4" Jan 13 20:17:32.108568 kubelet[2804]: I0113 20:17:32.108477 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec31e841-343c-4416-bf2e-9de8c105ba1d-kube-proxy\") pod \"kube-proxy-gj4br\" (UID: \"ec31e841-343c-4416-bf2e-9de8c105ba1d\") " pod="kube-system/kube-proxy-gj4br" Jan 13 20:17:32.108568 kubelet[2804]: I0113 20:17:32.108516 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wh9\" (UniqueName: \"kubernetes.io/projected/ec31e841-343c-4416-bf2e-9de8c105ba1d-kube-api-access-46wh9\") pod \"kube-proxy-gj4br\" (UID: \"ec31e841-343c-4416-bf2e-9de8c105ba1d\") " pod="kube-system/kube-proxy-gj4br" Jan 13 20:17:32.108568 kubelet[2804]: I0113 20:17:32.108561 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-cgroup\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108777 kubelet[2804]: I0113 20:17:32.108584 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-lib-modules\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108777 kubelet[2804]: I0113 20:17:32.108609 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mm5r\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-kube-api-access-8mm5r\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108777 kubelet[2804]: I0113 20:17:32.108634 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec31e841-343c-4416-bf2e-9de8c105ba1d-xtables-lock\") pod \"kube-proxy-gj4br\" (UID: \"ec31e841-343c-4416-bf2e-9de8c105ba1d\") " pod="kube-system/kube-proxy-gj4br" Jan 13 20:17:32.108777 kubelet[2804]: I0113 20:17:32.108655 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b295823-9475-4320-ae5f-8b076682770b-cilium-config-path\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108777 kubelet[2804]: I0113 20:17:32.108675 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-kernel\") pod \"cilium-n6nz4\" (UID: 
\"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108879 kubelet[2804]: I0113 20:17:32.108865 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-run\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108904 kubelet[2804]: I0113 20:17:32.108892 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-bpf-maps\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108927 kubelet[2804]: I0113 20:17:32.108920 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-hostproc\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108947 kubelet[2804]: I0113 20:17:32.108942 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-etc-cni-netd\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108970 kubelet[2804]: I0113 20:17:32.108961 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-net\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.108999 kubelet[2804]: I0113 20:17:32.108981 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-hubble-tls\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.109019 kubelet[2804]: I0113 20:17:32.109012 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cni-path\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.109042 kubelet[2804]: I0113 20:17:32.109033 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b295823-9475-4320-ae5f-8b076682770b-clustermesh-secrets\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.109063 kubelet[2804]: I0113 20:17:32.109053 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec31e841-343c-4416-bf2e-9de8c105ba1d-lib-modules\") pod \"kube-proxy-gj4br\" (UID: \"ec31e841-343c-4416-bf2e-9de8c105ba1d\") " pod="kube-system/kube-proxy-gj4br" Jan 13 20:17:32.109091 kubelet[2804]: I0113 20:17:32.109086 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-xtables-lock\") pod \"cilium-n6nz4\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " pod="kube-system/cilium-n6nz4" Jan 13 20:17:32.310388 kubelet[2804]: I0113 20:17:32.310338 2804 topology_manager.go:215] "Topology Admit Handler" podUID="308bd31c-d04c-43af-990b-126b05bb9db6" podNamespace="kube-system" podName="cilium-operator-5cc964979-vgxnt" Jan 13 20:17:32.403459 kubelet[2804]: E0113 20:17:32.403338 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:32.404933 containerd[1570]: time="2025-01-13T20:17:32.404884412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gj4br,Uid:ec31e841-343c-4416-bf2e-9de8c105ba1d,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:32.410399 kubelet[2804]: E0113 20:17:32.410294 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:32.410746 containerd[1570]: time="2025-01-13T20:17:32.410701673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6nz4,Uid:4b295823-9475-4320-ae5f-8b076682770b,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:32.415339 kubelet[2804]: I0113 20:17:32.415292 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/308bd31c-d04c-43af-990b-126b05bb9db6-cilium-config-path\") pod \"cilium-operator-5cc964979-vgxnt\" (UID: \"308bd31c-d04c-43af-990b-126b05bb9db6\") " pod="kube-system/cilium-operator-5cc964979-vgxnt" Jan 13 20:17:32.415517 kubelet[2804]: I0113 20:17:32.415459 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvpl\" (UniqueName: \"kubernetes.io/projected/308bd31c-d04c-43af-990b-126b05bb9db6-kube-api-access-8pvpl\") pod \"cilium-operator-5cc964979-vgxnt\" (UID: \"308bd31c-d04c-43af-990b-126b05bb9db6\") " pod="kube-system/cilium-operator-5cc964979-vgxnt" Jan 13 20:17:32.428283 containerd[1570]: time="2025-01-13T20:17:32.428210562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:32.428283 containerd[1570]: time="2025-01-13T20:17:32.428257505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:32.428417 containerd[1570]: time="2025-01-13T20:17:32.428268710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.428522 containerd[1570]: time="2025-01-13T20:17:32.428340464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.434702 containerd[1570]: time="2025-01-13T20:17:32.434477675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:32.434702 containerd[1570]: time="2025-01-13T20:17:32.434536183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:32.434702 containerd[1570]: time="2025-01-13T20:17:32.434551470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.434702 containerd[1570]: time="2025-01-13T20:17:32.434644514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.463140 containerd[1570]: time="2025-01-13T20:17:32.463102443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gj4br,Uid:ec31e841-343c-4416-bf2e-9de8c105ba1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2667b8a63eb8dca6b16a5ce089bdf98b89803a828096492b5fabac71bf7c0a7\"" Jan 13 20:17:32.465639 containerd[1570]: time="2025-01-13T20:17:32.465120193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6nz4,Uid:4b295823-9475-4320-ae5f-8b076682770b,Namespace:kube-system,Attempt:0,} returns sandbox id \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\"" Jan 13 20:17:32.466155 kubelet[2804]: E0113 20:17:32.466128 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:32.467434 kubelet[2804]: E0113 20:17:32.467377 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:32.471569 containerd[1570]: time="2025-01-13T20:17:32.471523290Z" level=info msg="CreateContainer within sandbox \"a2667b8a63eb8dca6b16a5ce089bdf98b89803a828096492b5fabac71bf7c0a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:17:32.471569 containerd[1570]: time="2025-01-13T20:17:32.471579957Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:17:32.499004 containerd[1570]: time="2025-01-13T20:17:32.498946612Z" level=info msg="CreateContainer within sandbox \"a2667b8a63eb8dca6b16a5ce089bdf98b89803a828096492b5fabac71bf7c0a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98cbcc621d09eefd4d16dac43a6bf01b27f87434821c9e13e4b4aca9c676fd28\"" Jan 13 20:17:32.499619 containerd[1570]: time="2025-01-13T20:17:32.499553898Z" level=info msg="StartContainer for \"98cbcc621d09eefd4d16dac43a6bf01b27f87434821c9e13e4b4aca9c676fd28\"" Jan 13 20:17:32.552386 containerd[1570]: time="2025-01-13T20:17:32.552318880Z" level=info msg="StartContainer for \"98cbcc621d09eefd4d16dac43a6bf01b27f87434821c9e13e4b4aca9c676fd28\" returns successfully" Jan 13 20:17:32.616217 kubelet[2804]: E0113 20:17:32.616183 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:32.616983 containerd[1570]: time="2025-01-13T20:17:32.616931684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vgxnt,Uid:308bd31c-d04c-43af-990b-126b05bb9db6,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:32.638177 containerd[1570]: time="2025-01-13T20:17:32.637949027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:32.638177 containerd[1570]: time="2025-01-13T20:17:32.638003012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:32.638177 containerd[1570]: time="2025-01-13T20:17:32.638123469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.639768 containerd[1570]: time="2025-01-13T20:17:32.638638111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:32.690184 containerd[1570]: time="2025-01-13T20:17:32.689822869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vgxnt,Uid:308bd31c-d04c-43af-990b-126b05bb9db6,Namespace:kube-system,Attempt:0,} returns sandbox id \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\"" Jan 13 20:17:32.692475 kubelet[2804]: E0113 20:17:32.692241 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:33.070489 kubelet[2804]: E0113 20:17:33.070442 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:33.079021 kubelet[2804]: I0113 20:17:33.078773 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gj4br" podStartSLOduration=1.07870908 podStartE2EDuration="1.07870908s" podCreationTimestamp="2025-01-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:33.078477856 +0000 UTC m=+16.141182110" watchObservedRunningTime="2025-01-13 20:17:33.07870908 +0000 UTC m=+16.141413294" Jan 13 20:17:43.383064 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:34124.service - OpenSSH per-connection server daemon (10.0.0.1:34124). Jan 13 20:17:43.582070 sshd[3179]: Accepted publickey for core from 10.0.0.1 port 34124 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:43.583464 sshd-session[3179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:43.590755 systemd-logind[1553]: New session 8 of user core. Jan 13 20:17:43.604227 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:17:43.752301 sshd[3186]: Connection closed by 10.0.0.1 port 34124 Jan 13 20:17:43.752566 sshd-session[3179]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:43.755650 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:17:43.756650 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:34124.service: Deactivated successfully. Jan 13 20:17:43.760939 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:17:43.763025 systemd-logind[1553]: Removed session 8. Jan 13 20:17:44.823035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551756461.mount: Deactivated successfully. 
Jan 13 20:17:46.171808 containerd[1570]: time="2025-01-13T20:17:46.171740985Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:46.172738 containerd[1570]: time="2025-01-13T20:17:46.172687029Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651502" Jan 13 20:17:46.173592 containerd[1570]: time="2025-01-13T20:17:46.173567895Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:46.175579 containerd[1570]: time="2025-01-13T20:17:46.175531000Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.703908024s" Jan 13 20:17:46.175579 containerd[1570]: time="2025-01-13T20:17:46.175579612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:17:46.178018 containerd[1570]: time="2025-01-13T20:17:46.177973347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:17:46.186070 containerd[1570]: time="2025-01-13T20:17:46.186026777Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:17:46.213341 containerd[1570]: time="2025-01-13T20:17:46.213290665Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\"" Jan 13 20:17:46.213969 containerd[1570]: time="2025-01-13T20:17:46.213911265Z" level=info msg="StartContainer for \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\"" Jan 13 20:17:46.262289 containerd[1570]: time="2025-01-13T20:17:46.262242927Z" level=info msg="StartContainer for \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\" returns successfully" Jan 13 20:17:46.445729 containerd[1570]: time="2025-01-13T20:17:46.441945517Z" level=info msg="shim disconnected" id=f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367 namespace=k8s.io Jan 13 20:17:46.445729 containerd[1570]: time="2025-01-13T20:17:46.445648949Z" level=warning msg="cleaning up after shim disconnected" id=f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367 namespace=k8s.io Jan 13 20:17:46.445729 containerd[1570]: time="2025-01-13T20:17:46.445671795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:17:47.119808 kubelet[2804]: E0113 20:17:47.119772 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 13 20:17:47.123402 containerd[1570]: time="2025-01-13T20:17:47.123221954Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:17:47.151568 containerd[1570]: time="2025-01-13T20:17:47.151507767Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\"" Jan 13 20:17:47.152539 containerd[1570]: time="2025-01-13T20:17:47.152352336Z" level=info msg="StartContainer for \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\"" Jan 13 20:17:47.206664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367-rootfs.mount: Deactivated successfully. Jan 13 20:17:47.212556 containerd[1570]: time="2025-01-13T20:17:47.212379218Z" level=info msg="StartContainer for \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\" returns successfully" Jan 13 20:17:47.231915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681937264.mount: Deactivated successfully. Jan 13 20:17:47.236838 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:17:47.237099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:17:47.237201 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:17:47.246281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:17:47.260141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:17:47.264600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d-rootfs.mount: Deactivated successfully. 
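The systemd-sysctl stop/start/finish sequence above brackets Cilium's apply-sysctl-overwrites init container, which drops kernel-parameter overrides onto the host and then lets systemd-sysctl re-apply the merged state. The exact keys depend on the Cilium release; the canonical override it ships is disabling reverse-path filtering, roughly as below (the drop-in path and file name are assumptions, not read from this host):

    # /etc/sysctl.d/99-zzz-override_cilium.conf  (hypothetical path)
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0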
Jan 13 20:17:47.273346 containerd[1570]: time="2025-01-13T20:17:47.273245548Z" level=info msg="shim disconnected" id=5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d namespace=k8s.io Jan 13 20:17:47.273346 containerd[1570]: time="2025-01-13T20:17:47.273330769Z" level=warning msg="cleaning up after shim disconnected" id=5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d namespace=k8s.io Jan 13 20:17:47.273346 containerd[1570]: time="2025-01-13T20:17:47.273351974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:17:47.496601 containerd[1570]: time="2025-01-13T20:17:47.496541867Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138298" Jan 13 20:17:47.498799 containerd[1570]: time="2025-01-13T20:17:47.498757736Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.320733735s" Jan 13 20:17:47.498998 containerd[1570]: time="2025-01-13T20:17:47.498896770Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:17:47.500974 containerd[1570]: time="2025-01-13T20:17:47.500947719Z" level=info msg="CreateContainer within sandbox \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:17:47.502611 containerd[1570]: time="2025-01-13T20:17:47.502570241Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:47.503582 containerd[1570]: time="2025-01-13T20:17:47.503286419Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:47.510334 containerd[1570]: time="2025-01-13T20:17:47.510298277Z" level=info msg="CreateContainer within sandbox \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\"" Jan 13 20:17:47.510906 containerd[1570]: time="2025-01-13T20:17:47.510879981Z" level=info msg="StartContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\"" Jan 13 20:17:47.560327 containerd[1570]: time="2025-01-13T20:17:47.560288430Z" level=info msg="StartContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" returns successfully" Jan 13 20:17:48.124814 kubelet[2804]: E0113 20:17:48.124783 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:48.132340 kubelet[2804]: E0113 20:17:48.132294 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:48.136951 containerd[1570]: time="2025-01-13T20:17:48.136901539Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:17:48.144441 kubelet[2804]: I0113 20:17:48.144380 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vgxnt" podStartSLOduration=1.339183971 podStartE2EDuration="16.144335599s" podCreationTimestamp="2025-01-13 20:17:32 +0000 UTC" firstStartedPulling="2025-01-13 20:17:32.694037815 +0000 UTC m=+15.756742069" lastFinishedPulling="2025-01-13 20:17:47.499189483 +0000 UTC m=+30.561893697" observedRunningTime="2025-01-13 20:17:48.144053811 +0000 UTC m=+31.206758065" watchObservedRunningTime="2025-01-13 20:17:48.144335599 +0000 UTC m=+31.207040013" Jan 13 20:17:48.169926 containerd[1570]: time="2025-01-13T20:17:48.169867470Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\"" Jan 13 20:17:48.173920 containerd[1570]: time="2025-01-13T20:17:48.170583121Z" level=info msg="StartContainer for \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\"" Jan 13 20:17:48.245938 containerd[1570]: time="2025-01-13T20:17:48.245891868Z" level=info msg="StartContainer for \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\" returns successfully" Jan 13 20:17:48.289915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2-rootfs.mount: Deactivated successfully. Jan 13 20:17:48.363043 containerd[1570]: time="2025-01-13T20:17:48.362983376Z" level=info msg="shim disconnected" id=28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2 namespace=k8s.io Jan 13 20:17:48.363043 containerd[1570]: time="2025-01-13T20:17:48.363039110Z" level=warning msg="cleaning up after shim disconnected" id=28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2 namespace=k8s.io Jan 13 20:17:48.363043 containerd[1570]: time="2025-01-13T20:17:48.363047031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:17:48.770994 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:34126.service - OpenSSH per-connection server daemon (10.0.0.1:34126). Jan 13 20:17:48.814548 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 34126 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:48.815659 sshd-session[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:48.819195 systemd-logind[1553]: New session 9 of user core. Jan 13 20:17:48.835038 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:17:48.946686 sshd[3464]: Connection closed by 10.0.0.1 port 34126 Jan 13 20:17:48.947225 sshd-session[3461]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:48.950300 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:34126.service: Deactivated successfully. Jan 13 20:17:48.952387 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:17:48.952403 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:17:48.953800 systemd-logind[1553]: Removed session 9. 
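The pod_startup_latency_tracker record above for cilium-operator decomposes cleanly: the E2E duration is observed-running minus pod creation, and the SLO duration additionally excludes the image-pull window. Checked against the timestamps in the record (the last digits drift slightly because the tracker mixes monotonic-clock readings):

    pull window   = 20:17:47.499189483 − 20:17:32.694037815 ≈ 14.805151668 s
    E2E duration  = 20:17:48.144335599 − 20:17:32.000000000 = 16.144335599 s
    SLO duration  ≈ 16.144335599 − 14.805151668 ≈ 1.339183 s   (reported: 1.339183971 s)

The earlier kube-proxy record shows the degenerate case: both pull timestamps are the zero time because no image was pulled, so podStartSLOduration and podStartE2EDuration are identical at 1.07870908 s.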
Jan 13 20:17:49.136371 kubelet[2804]: E0113 20:17:49.135699 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:49.136371 kubelet[2804]: E0113 20:17:49.136367 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:49.140916 containerd[1570]: time="2025-01-13T20:17:49.140822325Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:17:49.207846 containerd[1570]: time="2025-01-13T20:17:49.207797060Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\"" Jan 13 20:17:49.208389 containerd[1570]: time="2025-01-13T20:17:49.208352668Z" level=info msg="StartContainer for \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\"" Jan 13 20:17:49.227656 systemd[1]: run-containerd-runc-k8s.io-aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26-runc.GGRDtu.mount: Deactivated successfully. Jan 13 20:17:49.251705 containerd[1570]: time="2025-01-13T20:17:49.251669290Z" level=info msg="StartContainer for \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\" returns successfully" Jan 13 20:17:49.267571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26-rootfs.mount: Deactivated successfully. 
Jan 13 20:17:49.272920 containerd[1570]: time="2025-01-13T20:17:49.272862993Z" level=info msg="shim disconnected" id=aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26 namespace=k8s.io Jan 13 20:17:49.272920 containerd[1570]: time="2025-01-13T20:17:49.272917926Z" level=warning msg="cleaning up after shim disconnected" id=aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26 namespace=k8s.io Jan 13 20:17:49.273089 containerd[1570]: time="2025-01-13T20:17:49.272928208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:17:49.283972 containerd[1570]: time="2025-01-13T20:17:49.283930234Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:17:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:17:50.139231 kubelet[2804]: E0113 20:17:50.139205 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:50.142871 containerd[1570]: time="2025-01-13T20:17:50.142827005Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:17:50.157822 containerd[1570]: time="2025-01-13T20:17:50.157782153Z" level=info msg="CreateContainer within sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\"" Jan 13 20:17:50.158250 containerd[1570]: time="2025-01-13T20:17:50.158218610Z" level=info msg="StartContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\"" Jan 13 20:17:50.203366 containerd[1570]: time="2025-01-13T20:17:50.203312424Z" level=info msg="StartContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" returns successfully" Jan 13 20:17:50.218418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276940531.mount: Deactivated successfully. 
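Taken together, the CreateContainer/StartContainer records above walk through the Cilium agent pod's init chain inside sandbox 37440d477a73…, in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-running cilium-agent container. A skeletal rendering of how that ordering is declared in the DaemonSet — container names are from the log, everything else is an illustrative assumption for this Cilium v1.12 deployment:

    initContainers:
    - name: mount-cgroup             # exited 20:17:46; shim reaped afterwards
    - name: apply-sysctl-overwrites  # exited 20:17:47
    - name: mount-bpf-fs             # exited 20:17:48
    - name: clean-cilium-state       # exited 20:17:49 (note the runc cleanup warning, exit status 255)
    containers:
    - name: cilium-agent             # started 20:17:50 and stays up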
Jan 13 20:17:50.371702 kubelet[2804]: I0113 20:17:50.371616 2804 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:17:50.392989 kubelet[2804]: I0113 20:17:50.392882 2804 topology_manager.go:215] "Topology Admit Handler" podUID="38af4efc-b6ae-4334-843c-c05a213be218" podNamespace="kube-system" podName="coredns-76f75df574-tlgqf" Jan 13 20:17:50.393087 kubelet[2804]: I0113 20:17:50.393068 2804 topology_manager.go:215] "Topology Admit Handler" podUID="763610bb-38ea-4864-bc63-7aeb97a0101c" podNamespace="kube-system" podName="coredns-76f75df574-nqkjd" Jan 13 20:17:50.449045 kubelet[2804]: I0113 20:17:50.448995 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnxr\" (UniqueName: \"kubernetes.io/projected/763610bb-38ea-4864-bc63-7aeb97a0101c-kube-api-access-rwnxr\") pod \"coredns-76f75df574-nqkjd\" (UID: \"763610bb-38ea-4864-bc63-7aeb97a0101c\") " pod="kube-system/coredns-76f75df574-nqkjd" Jan 13 20:17:50.449177 kubelet[2804]: I0113 20:17:50.449060 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38af4efc-b6ae-4334-843c-c05a213be218-config-volume\") pod \"coredns-76f75df574-tlgqf\" (UID: \"38af4efc-b6ae-4334-843c-c05a213be218\") " pod="kube-system/coredns-76f75df574-tlgqf" Jan 13 20:17:50.449177 kubelet[2804]: I0113 20:17:50.449091 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh66q\" (UniqueName: \"kubernetes.io/projected/38af4efc-b6ae-4334-843c-c05a213be218-kube-api-access-fh66q\") pod \"coredns-76f75df574-tlgqf\" (UID: \"38af4efc-b6ae-4334-843c-c05a213be218\") " pod="kube-system/coredns-76f75df574-tlgqf" Jan 13 20:17:50.449177 kubelet[2804]: I0113 20:17:50.449114 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/763610bb-38ea-4864-bc63-7aeb97a0101c-config-volume\") pod \"coredns-76f75df574-nqkjd\" (UID: \"763610bb-38ea-4864-bc63-7aeb97a0101c\") " pod="kube-system/coredns-76f75df574-nqkjd" Jan 13 20:17:50.695909 kubelet[2804]: E0113 20:17:50.695794 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:50.696485 containerd[1570]: time="2025-01-13T20:17:50.696441088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nqkjd,Uid:763610bb-38ea-4864-bc63-7aeb97a0101c,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:50.698417 kubelet[2804]: E0113 20:17:50.698131 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:50.698780 containerd[1570]: time="2025-01-13T20:17:50.698653583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tlgqf,Uid:38af4efc-b6ae-4334-843c-c05a213be218,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:51.144473 kubelet[2804]: E0113 20:17:51.144395 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:52.149372 kubelet[2804]: E0113 20:17:52.149339 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:52.269401 systemd-networkd[1235]: cilium_host: Link UP Jan 13 20:17:52.269515 systemd-networkd[1235]: cilium_net: Link UP Jan 13 20:17:52.269633 systemd-networkd[1235]: cilium_net: Gained carrier Jan 13 20:17:52.269752 systemd-networkd[1235]: cilium_host: Gained carrier Jan 13 20:17:52.349348 systemd-networkd[1235]: cilium_vxlan: Link UP Jan 13 20:17:52.349354 systemd-networkd[1235]: cilium_vxlan: Gained carrier Jan 13 20:17:52.355889 systemd-networkd[1235]: cilium_host: Gained IPv6LL Jan 13 20:17:52.656803 kernel: NET: Registered PF_ALG protocol family Jan 13 20:17:53.100947 systemd-networkd[1235]: cilium_net: Gained IPv6LL Jan 13 20:17:53.152211 kubelet[2804]: E0113 20:17:53.152180 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:53.208636 systemd-networkd[1235]: lxc_health: Link UP Jan 13 20:17:53.218885 systemd-networkd[1235]: lxc_health: Gained carrier Jan 13 20:17:53.357172 systemd-networkd[1235]: lxc967b41143a1c: Link UP Jan 13 20:17:53.371810 kernel: eth0: renamed from tmp2e339 Jan 13 20:17:53.380780 kernel: eth0: renamed from tmp2cb67 Jan 13 20:17:53.389202 systemd-networkd[1235]: lxc1fefdfabab50: Link UP Jan 13 20:17:53.392300 systemd-networkd[1235]: lxc1fefdfabab50: Gained carrier Jan 13 20:17:53.392481 systemd-networkd[1235]: lxc967b41143a1c: Gained carrier Jan 13 20:17:53.955003 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:56328.service - OpenSSH per-connection server daemon (10.0.0.1:56328). Jan 13 20:17:53.999214 systemd-networkd[1235]: cilium_vxlan: Gained IPv6LL Jan 13 20:17:54.000409 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 56328 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:54.001780 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:54.006088 systemd-logind[1553]: New session 10 of user core. Jan 13 20:17:54.013060 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:17:54.155598 sshd[4047]: Connection closed by 10.0.0.1 port 56328 Jan 13 20:17:54.157070 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:54.160129 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:56328.service: Deactivated successfully. Jan 13 20:17:54.162446 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:17:54.162453 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:17:54.164261 systemd-logind[1553]: Removed session 10. 
Jan 13 20:17:54.434697 kubelet[2804]: E0113 20:17:54.434439 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:54.449916 kubelet[2804]: I0113 20:17:54.449591 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n6nz4" podStartSLOduration=8.741324089999999 podStartE2EDuration="22.449551007s" podCreationTimestamp="2025-01-13 20:17:32 +0000 UTC" firstStartedPulling="2025-01-13 20:17:32.469453475 +0000 UTC m=+15.532157689" lastFinishedPulling="2025-01-13 20:17:46.177680352 +0000 UTC m=+29.240384606" observedRunningTime="2025-01-13 20:17:51.158679567 +0000 UTC m=+34.221383821" watchObservedRunningTime="2025-01-13 20:17:54.449551007 +0000 UTC m=+37.512255261" Jan 13 20:17:54.573304 systemd-networkd[1235]: lxc_health: Gained IPv6LL Jan 13 20:17:54.892140 systemd-networkd[1235]: lxc967b41143a1c: Gained IPv6LL Jan 13 20:17:55.020184 systemd-networkd[1235]: lxc1fefdfabab50: Gained IPv6LL Jan 13 20:17:56.957998 containerd[1570]: time="2025-01-13T20:17:56.956415164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:56.957998 containerd[1570]: time="2025-01-13T20:17:56.956469255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:56.957998 containerd[1570]: time="2025-01-13T20:17:56.956483577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:56.958487 containerd[1570]: time="2025-01-13T20:17:56.958195018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:56.962870 containerd[1570]: time="2025-01-13T20:17:56.962520070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:56.962870 containerd[1570]: time="2025-01-13T20:17:56.962575881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:56.962870 containerd[1570]: time="2025-01-13T20:17:56.962597525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:56.962870 containerd[1570]: time="2025-01-13T20:17:56.962687141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:56.984304 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:17:56.985555 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:17:57.005530 containerd[1570]: time="2025-01-13T20:17:57.005491672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nqkjd,Uid:763610bb-38ea-4864-bc63-7aeb97a0101c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e339a09ee63cce8b6b18c90d010582bb03f87d945404948b294e553d21fb5b7\"" Jan 13 20:17:57.006669 kubelet[2804]: E0113 20:17:57.006650 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:57.010393 containerd[1570]: time="2025-01-13T20:17:57.010331477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tlgqf,Uid:38af4efc-b6ae-4334-843c-c05a213be218,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cb674e5ff3eb79483fa51bf75799c46c5cf09b6d9d811b65817dbac3c52b1dd\"" Jan 13 20:17:57.011019 kubelet[2804]: E0113 20:17:57.011000 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:57.011185 containerd[1570]: time="2025-01-13T20:17:57.011023003Z" level=info msg="CreateContainer within sandbox \"2e339a09ee63cce8b6b18c90d010582bb03f87d945404948b294e553d21fb5b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:17:57.013596 containerd[1570]: time="2025-01-13T20:17:57.013554066Z" level=info msg="CreateContainer within sandbox \"2cb674e5ff3eb79483fa51bf75799c46c5cf09b6d9d811b65817dbac3c52b1dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:17:57.023594 containerd[1570]: time="2025-01-13T20:17:57.023548614Z" level=info msg="CreateContainer within sandbox \"2e339a09ee63cce8b6b18c90d010582bb03f87d945404948b294e553d21fb5b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f6539157e66121177281f7c42f1202baaa68e99be68c2723261f1d63f38b80a\"" Jan 13 20:17:57.024154 containerd[1570]: time="2025-01-13T20:17:57.024096874Z" level=info msg="StartContainer for \"2f6539157e66121177281f7c42f1202baaa68e99be68c2723261f1d63f38b80a\"" Jan 13 20:17:57.024807 containerd[1570]: time="2025-01-13T20:17:57.024760876Z" level=info msg="CreateContainer within sandbox \"2cb674e5ff3eb79483fa51bf75799c46c5cf09b6d9d811b65817dbac3c52b1dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf34d9b94aa82196f35c84474c9d0191bed69843a92443c5f5a10bdc060f0ed8\"" Jan 13 20:17:57.025733 containerd[1570]: time="2025-01-13T20:17:57.025697327Z" level=info msg="StartContainer for \"bf34d9b94aa82196f35c84474c9d0191bed69843a92443c5f5a10bdc060f0ed8\"" Jan 13 20:17:57.080281 containerd[1570]: time="2025-01-13T20:17:57.080166408Z" level=info msg="StartContainer for \"bf34d9b94aa82196f35c84474c9d0191bed69843a92443c5f5a10bdc060f0ed8\" returns successfully" Jan 13 20:17:57.090555 containerd[1570]: time="2025-01-13T20:17:57.088130264Z" level=info msg="StartContainer for \"2f6539157e66121177281f7c42f1202baaa68e99be68c2723261f1d63f38b80a\" returns successfully" Jan 13 20:17:57.162020 kubelet[2804]: E0113 20:17:57.161022 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:57.175575 kubelet[2804]: E0113 20:17:57.171563 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:57.201346 kubelet[2804]: I0113 20:17:57.201301 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tlgqf" podStartSLOduration=25.201259393 podStartE2EDuration="25.201259393s" podCreationTimestamp="2025-01-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:57.177969134 +0000 UTC m=+40.240673388" watchObservedRunningTime="2025-01-13 20:17:57.201259393 +0000 UTC m=+40.263963647" Jan 13 20:17:57.991716 kubelet[2804]: I0113 20:17:57.991638 2804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:17:57.992976 kubelet[2804]: E0113 20:17:57.992952 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:58.007567 kubelet[2804]: I0113 20:17:58.007453 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nqkjd" podStartSLOduration=26.007413672 podStartE2EDuration="26.007413672s" podCreationTimestamp="2025-01-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:57.20227934 +0000 UTC m=+40.264983594" watchObservedRunningTime="2025-01-13 20:17:58.007413672 +0000 UTC m=+41.070117886" Jan 13 20:17:58.173856 kubelet[2804]: E0113 20:17:58.173720 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:58.175276 kubelet[2804]: E0113 20:17:58.174989 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:58.178303 kubelet[2804]: E0113 20:17:58.178279 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:59.175116 kubelet[2804]: E0113 20:17:59.175040 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:59.175116 kubelet[2804]: E0113 20:17:59.175088 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:59.177029 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Jan 13 20:17:59.224983 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:59.226695 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:59.230552 systemd-logind[1553]: New session 11 of user core. 
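Both coredns pods start with a config-volume projected from a kube-system ConfigMap (the VerifyControllerAttachedVolume records above), and the familiar nameserver warning reappears alongside them, presumably because they inherit the host resolver list that CoreDNS then forwards to. For orientation, a typical default Corefile of this CoreDNS generation — assumed, not read from this cluster:

    .:53 {
        errors
        health { lameduck 5s }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf { max_concurrent 1000 }
        cache 30
        loop
        reload
        loadbalance
    }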
Jan 13 20:17:59.241076 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:17:59.358935 sshd[4243]: Connection closed by 10.0.0.1 port 56332 Jan 13 20:17:59.359246 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:59.374173 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:56334.service - OpenSSH per-connection server daemon (10.0.0.1:56334). Jan 13 20:17:59.374610 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:56332.service: Deactivated successfully. Jan 13 20:17:59.376320 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:17:59.377578 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:17:59.379378 systemd-logind[1553]: Removed session 11. Jan 13 20:17:59.420430 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 56334 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:59.421808 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:59.426082 systemd-logind[1553]: New session 12 of user core. Jan 13 20:17:59.435081 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:17:59.617617 sshd[4264]: Connection closed by 10.0.0.1 port 56334 Jan 13 20:17:59.618602 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:59.627064 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:56342.service - OpenSSH per-connection server daemon (10.0.0.1:56342). Jan 13 20:17:59.627502 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:56334.service: Deactivated successfully. Jan 13 20:17:59.631927 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:17:59.632001 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:17:59.634183 systemd-logind[1553]: Removed session 12. Jan 13 20:17:59.680053 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 56342 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:59.681705 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:59.689309 systemd-logind[1553]: New session 13 of user core. Jan 13 20:17:59.699126 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:17:59.856504 sshd[4277]: Connection closed by 10.0.0.1 port 56342 Jan 13 20:17:59.856929 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:59.860150 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:56342.service: Deactivated successfully. Jan 13 20:17:59.862687 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:17:59.863261 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:17:59.864024 systemd-logind[1553]: Removed session 13. Jan 13 20:18:04.868069 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:48202.service - OpenSSH per-connection server daemon (10.0.0.1:48202). Jan 13 20:18:04.913294 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 48202 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:04.914709 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:04.919547 systemd-logind[1553]: New session 14 of user core. Jan 13 20:18:04.929065 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 20:18:05.064430 sshd[4294]: Connection closed by 10.0.0.1 port 48202 Jan 13 20:18:05.064772 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:05.067480 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:18:05.067598 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:48202.service: Deactivated successfully. Jan 13 20:18:05.070198 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:18:05.071360 systemd-logind[1553]: Removed session 14. Jan 13 20:18:10.080081 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:48208.service - OpenSSH per-connection server daemon (10.0.0.1:48208). Jan 13 20:18:10.126236 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 48208 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:10.127590 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:10.132833 systemd-logind[1553]: New session 15 of user core. Jan 13 20:18:10.142140 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:18:10.267789 sshd[4309]: Connection closed by 10.0.0.1 port 48208 Jan 13 20:18:10.268390 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:10.279060 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:48210.service - OpenSSH per-connection server daemon (10.0.0.1:48210). Jan 13 20:18:10.279474 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:48208.service: Deactivated successfully. Jan 13 20:18:10.283655 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:18:10.284379 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:18:10.286734 systemd-logind[1553]: Removed session 15. Jan 13 20:18:10.322584 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 48210 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:10.324075 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:10.329593 systemd-logind[1553]: New session 16 of user core. Jan 13 20:18:10.337107 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:18:10.549006 sshd[4325]: Connection closed by 10.0.0.1 port 48210 Jan 13 20:18:10.552110 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:10.560049 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216). Jan 13 20:18:10.560885 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:48210.service: Deactivated successfully. Jan 13 20:18:10.564331 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:18:10.565270 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:18:10.566436 systemd-logind[1553]: Removed session 16. Jan 13 20:18:10.609312 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:10.610993 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:10.615772 systemd-logind[1553]: New session 17 of user core. Jan 13 20:18:10.627097 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 20:18:11.839707 sshd[4338]: Connection closed by 10.0.0.1 port 48216 Jan 13 20:18:11.841089 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:11.848995 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:48348.service - OpenSSH per-connection server daemon (10.0.0.1:48348). Jan 13 20:18:11.849404 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:48216.service: Deactivated successfully. Jan 13 20:18:11.858142 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:18:11.859285 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:18:11.865099 systemd-logind[1553]: Removed session 17. Jan 13 20:18:11.895104 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 48348 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:11.896497 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:11.902074 systemd-logind[1553]: New session 18 of user core. Jan 13 20:18:11.916113 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:18:12.136933 sshd[4360]: Connection closed by 10.0.0.1 port 48348 Jan 13 20:18:12.137181 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:12.144978 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:48356.service - OpenSSH per-connection server daemon (10.0.0.1:48356). Jan 13 20:18:12.145352 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:48348.service: Deactivated successfully. Jan 13 20:18:12.148997 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:18:12.149160 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:18:12.153094 systemd-logind[1553]: Removed session 18. Jan 13 20:18:12.186177 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 48356 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:12.187414 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:12.191628 systemd-logind[1553]: New session 19 of user core. Jan 13 20:18:12.202009 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:18:12.316213 sshd[4373]: Connection closed by 10.0.0.1 port 48356 Jan 13 20:18:12.316561 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:12.319864 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:48356.service: Deactivated successfully. Jan 13 20:18:12.321847 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:18:12.321860 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:18:12.323086 systemd-logind[1553]: Removed session 19. Jan 13 20:18:17.335008 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:47424.service - OpenSSH per-connection server daemon (10.0.0.1:47424). Jan 13 20:18:17.376128 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 47424 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:17.377498 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:17.381068 systemd-logind[1553]: New session 20 of user core. Jan 13 20:18:17.391065 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 20:18:17.499377 sshd[4393]: Connection closed by 10.0.0.1 port 47424 Jan 13 20:18:17.499713 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:17.503197 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:47424.service: Deactivated successfully. Jan 13 20:18:17.505046 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:18:17.505199 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:18:17.506073 systemd-logind[1553]: Removed session 20. Jan 13 20:18:22.510991 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:53568.service - OpenSSH per-connection server daemon (10.0.0.1:53568). Jan 13 20:18:22.551592 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 53568 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:22.552733 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:22.557191 systemd-logind[1553]: New session 21 of user core. Jan 13 20:18:22.566980 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:18:22.675631 sshd[4409]: Connection closed by 10.0.0.1 port 53568 Jan 13 20:18:22.675958 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:22.679451 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:53568.service: Deactivated successfully. Jan 13 20:18:22.681395 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:18:22.681579 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:18:22.682643 systemd-logind[1553]: Removed session 21. Jan 13 20:18:27.688002 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:53572.service - OpenSSH per-connection server daemon (10.0.0.1:53572). Jan 13 20:18:27.729249 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 53572 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:27.730370 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:27.733994 systemd-logind[1553]: New session 22 of user core. Jan 13 20:18:27.741008 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:18:27.849727 sshd[4425]: Connection closed by 10.0.0.1 port 53572 Jan 13 20:18:27.850212 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:27.861035 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Jan 13 20:18:27.861398 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:53572.service: Deactivated successfully. Jan 13 20:18:27.863759 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:18:27.863841 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:18:27.865478 systemd-logind[1553]: Removed session 22. Jan 13 20:18:27.904974 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:27.906085 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:27.909678 systemd-logind[1553]: New session 23 of user core. Jan 13 20:18:27.913991 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 20:18:29.677811 containerd[1570]: time="2025-01-13T20:18:29.677703775Z" level=info msg="StopContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" with timeout 30 (s)" Jan 13 20:18:29.679496 containerd[1570]: time="2025-01-13T20:18:29.678538983Z" level=info msg="Stop container \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" with signal terminated" Jan 13 20:18:29.705255 systemd[1]: run-containerd-runc-k8s.io-4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066-runc.Kt0XeZ.mount: Deactivated successfully. Jan 13 20:18:29.718551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5-rootfs.mount: Deactivated successfully. Jan 13 20:18:29.727800 containerd[1570]: time="2025-01-13T20:18:29.727698410Z" level=info msg="StopContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" with timeout 2 (s)" Jan 13 20:18:29.727972 containerd[1570]: time="2025-01-13T20:18:29.727917911Z" level=info msg="Stop container \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" with signal terminated" Jan 13 20:18:29.729210 containerd[1570]: time="2025-01-13T20:18:29.729158244Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:18:29.733975 systemd-networkd[1235]: lxc_health: Link DOWN Jan 13 20:18:29.733980 systemd-networkd[1235]: lxc_health: Lost carrier Jan 13 20:18:29.742845 containerd[1570]: time="2025-01-13T20:18:29.742791904Z" level=info msg="shim disconnected" id=41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5 namespace=k8s.io Jan 13 20:18:29.742845 containerd[1570]: time="2025-01-13T20:18:29.742846019Z" level=warning msg="cleaning up after shim disconnected" id=41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5 namespace=k8s.io Jan 13 20:18:29.743013 containerd[1570]: time="2025-01-13T20:18:29.742855418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:18:29.775429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066-rootfs.mount: Deactivated successfully. 
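The "failed to reload cni configuration" error above is containerd reacting to the agent removing /etc/cni/net.d/05-cilium.conf on shutdown; with nothing left in net.d, the CRI falls back to "cni plugin not initialized". For context, that file is ordinarily the one-plugin Cilium config, along these lines (contents assumed, not captured here):

    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni"
    }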
Jan 13 20:18:29.781254 containerd[1570]: time="2025-01-13T20:18:29.781196381Z" level=info msg="shim disconnected" id=4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066 namespace=k8s.io Jan 13 20:18:29.782066 containerd[1570]: time="2025-01-13T20:18:29.782041148Z" level=warning msg="cleaning up after shim disconnected" id=4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066 namespace=k8s.io Jan 13 20:18:29.782164 containerd[1570]: time="2025-01-13T20:18:29.782149339Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:18:29.790848 containerd[1570]: time="2025-01-13T20:18:29.790806030Z" level=info msg="StopContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" returns successfully" Jan 13 20:18:29.793830 containerd[1570]: time="2025-01-13T20:18:29.793791971Z" level=info msg="StopPodSandbox for \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\"" Jan 13 20:18:29.796115 containerd[1570]: time="2025-01-13T20:18:29.796081733Z" level=info msg="StopContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" returns successfully" Jan 13 20:18:29.796492 containerd[1570]: time="2025-01-13T20:18:29.796462460Z" level=info msg="StopPodSandbox for \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\"" Jan 13 20:18:29.796719 containerd[1570]: time="2025-01-13T20:18:29.796591969Z" level=info msg="Container to stop \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.796719 containerd[1570]: time="2025-01-13T20:18:29.796610847Z" level=info msg="Container to stop \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.796719 containerd[1570]: time="2025-01-13T20:18:29.796620767Z" level=info msg="Container to stop \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.796719 containerd[1570]: time="2025-01-13T20:18:29.796629046Z" level=info msg="Container to stop \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.796719 containerd[1570]: time="2025-01-13T20:18:29.796637805Z" level=info msg="Container to stop \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.798478 containerd[1570]: time="2025-01-13T20:18:29.798426650Z" level=info msg="Container to stop \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:18:29.798492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739-shm.mount: Deactivated successfully. 
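The stop flow recorded above — StopContainer with a signal, a per-container timeout (30 s for the operator, 2 s for the agent), shim teardown, then StopPodSandbox — maps directly onto containerd's task API. Below is a minimal standalone sketch of that sequence using containerd's Go client; it is not kubelet's actual code, and the socket path is assumed while the namespace and container ID come from the log:

    // stopflow.go — sketch of a SIGTERM-with-timeout container stop (not kubelet source).
    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // "namespace=k8s.io" appears on every shim record above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Container ID taken from the log ("4e231b75…" is the cilium-agent container).
        c, err := client.LoadContainer(ctx, "4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066")
        if err != nil {
            log.Fatal(err)
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exited, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // "Stop container ... with signal terminated"
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case <-exited:
            // graceful exit inside the grace period
        case <-time.After(2 * time.Second): // "with timeout 2 (s)" for the agent
            _ = task.Kill(ctx, syscall.SIGKILL) // escalate, as the CRI does
            <-exited
        }
        if _, err := task.Delete(ctx); err != nil { // reaps the shim ("shim disconnected")
            log.Fatal(err)
        }
    }

The 30 s versus 2 s timeouts mirror the pods' terminationGracePeriodSeconds: 30 is the Kubernetes default, while the agent pod is evidently configured for a fast stop.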
Jan 13 20:18:29.831691 containerd[1570]: time="2025-01-13T20:18:29.831481230Z" level=info msg="shim disconnected" id=98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5 namespace=k8s.io Jan 13 20:18:29.831691 containerd[1570]: time="2025-01-13T20:18:29.831535546Z" level=warning msg="cleaning up after shim disconnected" id=98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5 namespace=k8s.io Jan 13 20:18:29.831691 containerd[1570]: time="2025-01-13T20:18:29.831543585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:18:29.832045 containerd[1570]: time="2025-01-13T20:18:29.831911033Z" level=info msg="shim disconnected" id=37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739 namespace=k8s.io Jan 13 20:18:29.832045 containerd[1570]: time="2025-01-13T20:18:29.831952230Z" level=warning msg="cleaning up after shim disconnected" id=37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739 namespace=k8s.io Jan 13 20:18:29.832045 containerd[1570]: time="2025-01-13T20:18:29.831960829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:18:29.845946 containerd[1570]: time="2025-01-13T20:18:29.845895183Z" level=info msg="TearDown network for sandbox \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" successfully" Jan 13 20:18:29.845946 containerd[1570]: time="2025-01-13T20:18:29.845931620Z" level=info msg="StopPodSandbox for \"37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739\" returns successfully" Jan 13 20:18:29.849806 containerd[1570]: time="2025-01-13T20:18:29.849779127Z" level=info msg="TearDown network for sandbox \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\" successfully" Jan 13 20:18:29.849918 containerd[1570]: time="2025-01-13T20:18:29.849903436Z" level=info msg="StopPodSandbox for \"98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5\" returns successfully" Jan 13 20:18:29.966976 kubelet[2804]: I0113 20:18:29.966942 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cni-path\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.966991 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/308bd31c-d04c-43af-990b-126b05bb9db6-cilium-config-path\") pod \"308bd31c-d04c-43af-990b-126b05bb9db6\" (UID: \"308bd31c-d04c-43af-990b-126b05bb9db6\") " Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.967016 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-hubble-tls\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.967034 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-net\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") " Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.967055 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4b295823-9475-4320-ae5f-8b076682770b-clustermesh-secrets\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.967073 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-hostproc\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967811 kubelet[2804]: I0113 20:18:29.967093 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pvpl\" (UniqueName: \"kubernetes.io/projected/308bd31c-d04c-43af-990b-126b05bb9db6-kube-api-access-8pvpl\") pod \"308bd31c-d04c-43af-990b-126b05bb9db6\" (UID: \"308bd31c-d04c-43af-990b-126b05bb9db6\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967114 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-cgroup\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967132 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-xtables-lock\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967149 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-bpf-maps\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967166 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-etc-cni-netd\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967190 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mm5r\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-kube-api-access-8mm5r\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.967963 kubelet[2804]: I0113 20:18:29.967209 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-run\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.968096 kubelet[2804]: I0113 20:18:29.967229 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-kernel\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.971218 kubelet[2804]: I0113 20:18:29.971185 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.971615 kubelet[2804]: I0113 20:18:29.971302 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.972195 kubelet[2804]: I0113 20:18:29.972165 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/308bd31c-d04c-43af-990b-126b05bb9db6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "308bd31c-d04c-43af-990b-126b05bb9db6" (UID: "308bd31c-d04c-43af-990b-126b05bb9db6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:18:29.972320 kubelet[2804]: I0113 20:18:29.972280 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.974263 kubelet[2804]: I0113 20:18:29.974241 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.974459 kubelet[2804]: I0113 20:18:29.974429 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.975413 kubelet[2804]: I0113 20:18:29.975246 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.975413 kubelet[2804]: I0113 20:18:29.975382 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.975413 kubelet[2804]: I0113 20:18:29.975404 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.975541 kubelet[2804]: I0113 20:18:29.975509 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.976497 kubelet[2804]: I0113 20:18:29.976459 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b295823-9475-4320-ae5f-8b076682770b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:18:29.976497 kubelet[2804]: I0113 20:18:29.976474 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:18:29.976497 kubelet[2804]: I0113 20:18:29.976458 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308bd31c-d04c-43af-990b-126b05bb9db6-kube-api-access-8pvpl" (OuterVolumeSpecName: "kube-api-access-8pvpl") pod "308bd31c-d04c-43af-990b-126b05bb9db6" (UID: "308bd31c-d04c-43af-990b-126b05bb9db6"). InnerVolumeSpecName "kube-api-access-8pvpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:18:29.977050 kubelet[2804]: I0113 20:18:29.977030 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:18:29.977082 kubelet[2804]: I0113 20:18:29.977064 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-lib-modules\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.977112 kubelet[2804]: I0113 20:18:29.977102 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b295823-9475-4320-ae5f-8b076682770b-cilium-config-path\") pod \"4b295823-9475-4320-ae5f-8b076682770b\" (UID: \"4b295823-9475-4320-ae5f-8b076682770b\") "
Jan 13 20:18:29.977151 kubelet[2804]: I0113 20:18:29.977140 2804 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977182 kubelet[2804]: I0113 20:18:29.977157 2804 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977182 kubelet[2804]: I0113 20:18:29.977167 2804 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977182 kubelet[2804]: I0113 20:18:29.977178 2804 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977187 2804 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/308bd31c-d04c-43af-990b-126b05bb9db6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977198 2804 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977207 2804 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977223 2804 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b295823-9475-4320-ae5f-8b076682770b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977234 2804 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8pvpl\" (UniqueName: \"kubernetes.io/projected/308bd31c-d04c-43af-990b-126b05bb9db6-kube-api-access-8pvpl\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977244 2804 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977253 2804 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977265 kubelet[2804]: I0113 20:18:29.977262 2804 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977447 kubelet[2804]: I0113 20:18:29.977271 2804 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977447 kubelet[2804]: I0113 20:18:29.977280 2804 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b295823-9475-4320-ae5f-8b076682770b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:29.977638 kubelet[2804]: I0113 20:18:29.977605 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-kube-api-access-8mm5r" (OuterVolumeSpecName: "kube-api-access-8mm5r") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "kube-api-access-8mm5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:18:29.979185 kubelet[2804]: I0113 20:18:29.979160 2804 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b295823-9475-4320-ae5f-8b076682770b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b295823-9475-4320-ae5f-8b076682770b" (UID: "4b295823-9475-4320-ae5f-8b076682770b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:18:30.078347 kubelet[2804]: I0113 20:18:30.078300 2804 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8mm5r\" (UniqueName: \"kubernetes.io/projected/4b295823-9475-4320-ae5f-8b076682770b-kube-api-access-8mm5r\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:30.078347 kubelet[2804]: I0113 20:18:30.078341 2804 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b295823-9475-4320-ae5f-8b076682770b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 13 20:18:30.242016 kubelet[2804]: I0113 20:18:30.241428 2804 scope.go:117] "RemoveContainer" containerID="4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066"
Jan 13 20:18:30.244379 containerd[1570]: time="2025-01-13T20:18:30.243926720Z" level=info msg="RemoveContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\""
Jan 13 20:18:30.247676 containerd[1570]: time="2025-01-13T20:18:30.247631020Z" level=info msg="RemoveContainer for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" returns successfully"
Jan 13 20:18:30.248025 kubelet[2804]: I0113 20:18:30.247878 2804 scope.go:117] "RemoveContainer" containerID="aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26"
Jan 13 20:18:30.250359 containerd[1570]: time="2025-01-13T20:18:30.250318122Z" level=info msg="RemoveContainer for \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\""
Jan 13 20:18:30.256394 containerd[1570]: time="2025-01-13T20:18:30.256341953Z" level=info msg="RemoveContainer for \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\" returns successfully"
Jan 13 20:18:30.256669 kubelet[2804]: I0113 20:18:30.256588 2804 scope.go:117] "RemoveContainer" containerID="28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2"
Jan 13 20:18:30.259455 containerd[1570]: time="2025-01-13T20:18:30.258950101Z" level=info msg="RemoveContainer for \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\""
Jan 13 20:18:30.262198 containerd[1570]: time="2025-01-13T20:18:30.262163480Z" level=info msg="RemoveContainer for \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\" returns successfully"
Jan 13 20:18:30.262552 kubelet[2804]: I0113 20:18:30.262447 2804 scope.go:117] "RemoveContainer" containerID="5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d"
Jan 13 20:18:30.263908 containerd[1570]: time="2025-01-13T20:18:30.263884180Z" level=info msg="RemoveContainer for \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\""
Jan 13 20:18:30.267478 containerd[1570]: time="2025-01-13T20:18:30.267350819Z" level=info msg="RemoveContainer for \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\" returns successfully"
Jan 13 20:18:30.267558 kubelet[2804]: I0113 20:18:30.267526 2804 scope.go:117] "RemoveContainer" containerID="f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367"
Jan 13 20:18:30.268701 containerd[1570]: time="2025-01-13T20:18:30.268482647Z" level=info msg="RemoveContainer for \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\""
Jan 13 20:18:30.270589 containerd[1570]: time="2025-01-13T20:18:30.270502123Z" level=info msg="RemoveContainer for \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\" returns successfully"
Jan 13 20:18:30.270692 kubelet[2804]: I0113 20:18:30.270657 2804 scope.go:117] "RemoveContainer" containerID="4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066"
Jan 13 20:18:30.270927 containerd[1570]: time="2025-01-13T20:18:30.270891491Z" level=error msg="ContainerStatus for \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\": not found"
Jan 13 20:18:30.277854 kubelet[2804]: E0113 20:18:30.277833 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\": not found" containerID="4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066"
Jan 13 20:18:30.281374 kubelet[2804]: I0113 20:18:30.281343 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066"} err="failed to get container status \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e231b75939f0c08ae289ac3755591b0805892296401c38bc50d4af762dd3066\": not found"
Jan 13 20:18:30.281448 kubelet[2804]: I0113 20:18:30.281390 2804 scope.go:117] "RemoveContainer" containerID="aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26"
Jan 13 20:18:30.281920 containerd[1570]: time="2025-01-13T20:18:30.281815085Z" level=error msg="ContainerStatus for \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\": not found"
Jan 13 20:18:30.282164 kubelet[2804]: E0113 20:18:30.282094 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\": not found" containerID="aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26"
Jan 13 20:18:30.282164 kubelet[2804]: I0113 20:18:30.282126 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26"} err="failed to get container status \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\": rpc error: code = NotFound desc = an error occurred when try to find container \"aec5833bbbbf24ea66ce84e83f648d351e2d767efad20f65dacbaf33805eac26\": not found"
Jan 13 20:18:30.282164 kubelet[2804]: I0113 20:18:30.282137 2804 scope.go:117] "RemoveContainer" containerID="28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2"
Jan 13 20:18:30.282487 containerd[1570]: time="2025-01-13T20:18:30.282452193Z" level=error msg="ContainerStatus for \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\": not found"
Jan 13 20:18:30.282586 kubelet[2804]: E0113 20:18:30.282572 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\": not found" containerID="28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2"
Jan 13 20:18:30.282625 kubelet[2804]: I0113 20:18:30.282603 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2"} err="failed to get container status \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\": rpc error: code = NotFound desc = an error occurred when try to find container \"28dddb3d81280a394c8e783cd2862c3e3915ead925d95636e58582ee2aad0ab2\": not found"
Jan 13 20:18:30.282625 kubelet[2804]: I0113 20:18:30.282613 2804 scope.go:117] "RemoveContainer" containerID="5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d"
Jan 13 20:18:30.282819 containerd[1570]: time="2025-01-13T20:18:30.282786766Z" level=error msg="ContainerStatus for \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\": not found"
Jan 13 20:18:30.282899 kubelet[2804]: E0113 20:18:30.282887 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\": not found" containerID="5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d"
Jan 13 20:18:30.282948 kubelet[2804]: I0113 20:18:30.282908 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d"} err="failed to get container status \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a4a17a7f5a83cf98aa361574689c6629e88a4c616e125bea9dd97171a29f71d\": not found"
Jan 13 20:18:30.282948 kubelet[2804]: I0113 20:18:30.282916 2804 scope.go:117] "RemoveContainer" containerID="f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367"
Jan 13 20:18:30.283147 containerd[1570]: time="2025-01-13T20:18:30.283117099Z" level=error msg="ContainerStatus for \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\": not found"
Jan 13 20:18:30.283305 kubelet[2804]: E0113 20:18:30.283238 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\": not found" containerID="f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367"
Jan 13 20:18:30.283440 kubelet[2804]: I0113 20:18:30.283371 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367"} err="failed to get container status \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5fd1f1c129c85aac33b0b5badfb36b3c1efa5b0ba9734e8003fe525f95ba367\": not found"
Jan 13 20:18:30.283440 kubelet[2804]: I0113 20:18:30.283391 2804 scope.go:117] "RemoveContainer" containerID="41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5"
Jan 13 20:18:30.284935 containerd[1570]: time="2025-01-13T20:18:30.284910993Z" level=info msg="RemoveContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\""
Jan 13 20:18:30.292250 containerd[1570]: time="2025-01-13T20:18:30.292205841Z" level=info msg="RemoveContainer for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" returns successfully"
Jan 13 20:18:30.292621 kubelet[2804]: I0113 20:18:30.292533 2804 scope.go:117] "RemoveContainer" containerID="41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5"
Jan 13 20:18:30.293071 containerd[1570]: time="2025-01-13T20:18:30.293036654Z" level=error msg="ContainerStatus for \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\": not found"
Jan 13 20:18:30.293335 kubelet[2804]: E0113 20:18:30.293275 2804 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\": not found" containerID="41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5"
Jan 13 20:18:30.293335 kubelet[2804]: I0113 20:18:30.293310 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5"} err="failed to get container status \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"41b798a6ab52bbd9da921cd06515a35b32b17fb785797b3bcad7b522493af7b5\": not found"
Jan 13 20:18:30.702226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5-rootfs.mount: Deactivated successfully.
Jan 13 20:18:30.702383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98f896ef5269b3669dc7c9dc8d8ff6423e56569d7596156fae9b238c6a45e2d5-shm.mount: Deactivated successfully.
Jan 13 20:18:30.702473 systemd[1]: var-lib-kubelet-pods-308bd31c\x2dd04c\x2d43af\x2d990b\x2d126b05bb9db6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pvpl.mount: Deactivated successfully.
Jan 13 20:18:30.702559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37440d477a73a18c145adb0c50de611b75e42e02e8138321ff4623e12192e739-rootfs.mount: Deactivated successfully.
Jan 13 20:18:30.702642 systemd[1]: var-lib-kubelet-pods-4b295823\x2d9475\x2d4320\x2dae5f\x2d8b076682770b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8mm5r.mount: Deactivated successfully.
Jan 13 20:18:30.702722 systemd[1]: var-lib-kubelet-pods-4b295823\x2d9475\x2d4320\x2dae5f\x2d8b076682770b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:18:30.702816 systemd[1]: var-lib-kubelet-pods-4b295823\x2d9475\x2d4320\x2dae5f\x2d8b076682770b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
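The kubelet lines above use the klog text format: a severity letter fused to the month and day (I0113, E0113), a wall-clock time, the emitting PID, the source file and line, and a structured message. A minimal Python sketch for pulling those fields apart when working with a dump like this one (the regex and field names are our own, not anything kubelet defines):

```python
import re

# Matches klog-style records such as:
#   I0113 20:18:29.977064 2804 reconciler_common.go:172] "operationExecutor.UnmountVolume started ..."
# Severity is one of I/W/E/F; "0113" is the month and day fused together.
KLOG_RE = re.compile(
    r'(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<source>[\w.]+:\d+)\]\s+'
    r'(?P<message>.*)'
)

def parse_klog(line):
    """Return a dict of klog fields, or None when the line is not klog-formatted."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

fields = parse_klog('I0113 20:18:29.977064 2804 '
                    'reconciler_common.go:172] "operationExecutor.UnmountVolume started ..."')
print(fields["severity"], fields["source"])  # I reconciler_common.go:172
```

Using search rather than match lets the same regex work whether or not the journald prefix (Jan 13 20:18:29.977082 kubelet[2804]:) is still attached to the line.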
Jan 13 20:18:31.032659 kubelet[2804]: I0113 20:18:31.031831 2804 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="308bd31c-d04c-43af-990b-126b05bb9db6" path="/var/lib/kubelet/pods/308bd31c-d04c-43af-990b-126b05bb9db6/volumes"
Jan 13 20:18:31.032659 kubelet[2804]: I0113 20:18:31.032211 2804 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4b295823-9475-4320-ae5f-8b076682770b" path="/var/lib/kubelet/pods/4b295823-9475-4320-ae5f-8b076682770b/volumes"
Jan 13 20:18:31.641824 sshd[4440]: Connection closed by 10.0.0.1 port 53588
Jan 13 20:18:31.642267 sshd-session[4434]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:31.647980 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764).
Jan 13 20:18:31.648361 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:53588.service: Deactivated successfully.
Jan 13 20:18:31.650742 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:18:31.651637 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:18:31.652572 systemd-logind[1553]: Removed session 23.
Jan 13 20:18:31.688302 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:18:31.689322 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:31.692816 systemd-logind[1553]: New session 24 of user core.
Jan 13 20:18:31.699985 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:18:32.109043 kubelet[2804]: E0113 20:18:32.109005 2804 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:18:33.524699 sshd[4614]: Connection closed by 10.0.0.1 port 53764
Jan 13 20:18:33.525077 sshd-session[4608]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:33.534376 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456).
Jan 13 20:18:33.534919 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:53764.service: Deactivated successfully.
Jan 13 20:18:33.541236 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:18:33.554539 systemd-logind[1553]: Session 24 logged out. Waiting for processes to exit.
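The two kubelet_volumes.go entries above confirm that the per-pod volume directories of both deleted pods were removed from disk. A quick check that nothing was left behind could look like the sketch below, which follows the /var/lib/kubelet/pods/<podUID>/volumes layout shown in the log (run as root on the node; the path constant is the default kubelet state directory and may differ if relocated):

```python
import os

# Pod UIDs taken from the "Cleaned up orphaned pod volumes dir" entries above.
POD_UIDS = [
    "308bd31c-d04c-43af-990b-126b05bb9db6",
    "4b295823-9475-4320-ae5f-8b076682770b",
]
KUBELET_PODS_DIR = "/var/lib/kubelet/pods"  # default kubelet state dir

for uid in POD_UIDS:
    volumes_dir = os.path.join(KUBELET_PODS_DIR, uid, "volumes")
    if os.path.isdir(volumes_dir):
        # Anything still listed here would mean the teardown did not finish.
        print(f"{volumes_dir}: leftover entries: {os.listdir(volumes_dir)}")
    else:
        print(f"{volumes_dir}: removed (cleanup completed)")
```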
Jan 13 20:18:33.556337 kubelet[2804]: I0113 20:18:33.556137 2804 topology_manager.go:215] "Topology Admit Handler" podUID="0858afd8-c1b1-4d47-b60a-6476f6a47d3f" podNamespace="kube-system" podName="cilium-29q52"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556204 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="mount-cgroup"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556215 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="308bd31c-d04c-43af-990b-126b05bb9db6" containerName="cilium-operator"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556223 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="apply-sysctl-overwrites"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556229 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="mount-bpf-fs"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556235 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="clean-cilium-state"
Jan 13 20:18:33.556337 kubelet[2804]: E0113 20:18:33.556241 2804 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="cilium-agent"
Jan 13 20:18:33.562249 systemd-logind[1553]: Removed session 24.
Jan 13 20:18:33.565789 kubelet[2804]: I0113 20:18:33.565710 2804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b295823-9475-4320-ae5f-8b076682770b" containerName="cilium-agent"
Jan 13 20:18:33.565789 kubelet[2804]: I0113 20:18:33.565754 2804 memory_manager.go:354] "RemoveStaleState removing state" podUID="308bd31c-d04c-43af-990b-126b05bb9db6" containerName="cilium-operator"
Jan 13 20:18:33.595041 kubelet[2804]: I0113 20:18:33.595005 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-cni-path\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595171 kubelet[2804]: I0113 20:18:33.595053 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-xtables-lock\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595171 kubelet[2804]: I0113 20:18:33.595077 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-host-proc-sys-net\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595171 kubelet[2804]: I0113 20:18:33.595133 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-cilium-ipsec-secrets\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595246 kubelet[2804]: I0113 20:18:33.595178 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-hubble-tls\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595246 kubelet[2804]: I0113 20:18:33.595239 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-lib-modules\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595308 kubelet[2804]: I0113 20:18:33.595290 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-bpf-maps\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595337 kubelet[2804]: I0113 20:18:33.595320 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-cilium-config-path\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595382 kubelet[2804]: I0113 20:18:33.595373 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-etc-cni-netd\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595410 kubelet[2804]: I0113 20:18:33.595397 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-clustermesh-secrets\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595454 kubelet[2804]: I0113 20:18:33.595416 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df8hc\" (UniqueName: \"kubernetes.io/projected/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-kube-api-access-df8hc\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595478 kubelet[2804]: I0113 20:18:33.595472 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-hostproc\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595499 kubelet[2804]: I0113 20:18:33.595491 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-cilium-cgroup\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595519 kubelet[2804]: I0113 20:18:33.595509 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-cilium-run\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.595580 kubelet[2804]: I0113 20:18:33.595568 2804 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0858afd8-c1b1-4d47-b60a-6476f6a47d3f-host-proc-sys-kernel\") pod \"cilium-29q52\" (UID: \"0858afd8-c1b1-4d47-b60a-6476f6a47d3f\") " pod="kube-system/cilium-29q52"
Jan 13 20:18:33.603610 sshd[4625]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:18:33.604841 sshd-session[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:33.608482 systemd-logind[1553]: New session 25 of user core.
Jan 13 20:18:33.617057 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:18:33.667239 sshd[4630]: Connection closed by 10.0.0.1 port 40456
Jan 13 20:18:33.667718 sshd-session[4625]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:33.676969 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:40462.service - OpenSSH per-connection server daemon (10.0.0.1:40462).
Jan 13 20:18:33.677359 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:40456.service: Deactivated successfully.
Jan 13 20:18:33.680365 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:18:33.681451 systemd-logind[1553]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:18:33.682697 systemd-logind[1553]: Removed session 25.
Jan 13 20:18:33.724327 sshd[4633]: Accepted publickey for core from 10.0.0.1 port 40462 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:18:33.725539 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:18:33.729355 systemd-logind[1553]: New session 26 of user core.
Jan 13 20:18:33.745174 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:18:33.874372 kubelet[2804]: E0113 20:18:33.874252 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:33.875041 containerd[1570]: time="2025-01-13T20:18:33.874880361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29q52,Uid:0858afd8-c1b1-4d47-b60a-6476f6a47d3f,Namespace:kube-system,Attempt:0,}"
Jan 13 20:18:33.893056 containerd[1570]: time="2025-01-13T20:18:33.892868571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:33.893056 containerd[1570]: time="2025-01-13T20:18:33.892912888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:33.893056 containerd[1570]: time="2025-01-13T20:18:33.892923128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:33.893056 containerd[1570]: time="2025-01-13T20:18:33.893002322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:33.926421 containerd[1570]: time="2025-01-13T20:18:33.926371836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29q52,Uid:0858afd8-c1b1-4d47-b60a-6476f6a47d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\""
Jan 13 20:18:33.927172 kubelet[2804]: E0113 20:18:33.927154 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:33.930243 containerd[1570]: time="2025-01-13T20:18:33.930209982Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:18:33.942168 containerd[1570]: time="2025-01-13T20:18:33.942116994Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"562f5771ba030f180b92e158a0c33a734727cd33c51cc0bfa91b05b4d9f32fea\""
Jan 13 20:18:33.943534 containerd[1570]: time="2025-01-13T20:18:33.942726514Z" level=info msg="StartContainer for \"562f5771ba030f180b92e158a0c33a734727cd33c51cc0bfa91b05b4d9f32fea\""
Jan 13 20:18:33.996850 containerd[1570]: time="2025-01-13T20:18:33.993957846Z" level=info msg="StartContainer for \"562f5771ba030f180b92e158a0c33a734727cd33c51cc0bfa91b05b4d9f32fea\" returns successfully"
Jan 13 20:18:34.049308 containerd[1570]: time="2025-01-13T20:18:34.049249773Z" level=info msg="shim disconnected" id=562f5771ba030f180b92e158a0c33a734727cd33c51cc0bfa91b05b4d9f32fea namespace=k8s.io
Jan 13 20:18:34.049308 containerd[1570]: time="2025-01-13T20:18:34.049303210Z" level=warning msg="cleaning up after shim disconnected" id=562f5771ba030f180b92e158a0c33a734727cd33c51cc0bfa91b05b4d9f32fea namespace=k8s.io
Jan 13 20:18:34.049308 containerd[1570]: time="2025-01-13T20:18:34.049311850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:18:34.253592 kubelet[2804]: E0113 20:18:34.253567 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:34.257185 containerd[1570]: time="2025-01-13T20:18:34.257126525Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:18:34.267723 containerd[1570]: time="2025-01-13T20:18:34.267623641Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d9c8d767d99e3dfcc2f2d66b553dccf24e511c6e745f0e6a4aa81be9b5ec4e32\""
Jan 13 20:18:34.269823 containerd[1570]: time="2025-01-13T20:18:34.269798947Z" level=info msg="StartContainer for \"d9c8d767d99e3dfcc2f2d66b553dccf24e511c6e745f0e6a4aa81be9b5ec4e32\""
Jan 13 20:18:34.324060 containerd[1570]: time="2025-01-13T20:18:34.323950061Z" level=info msg="StartContainer for \"d9c8d767d99e3dfcc2f2d66b553dccf24e511c6e745f0e6a4aa81be9b5ec4e32\" returns successfully"
Jan 13 20:18:34.348558 containerd[1570]: time="2025-01-13T20:18:34.348498753Z" level=info msg="shim disconnected" id=d9c8d767d99e3dfcc2f2d66b553dccf24e511c6e745f0e6a4aa81be9b5ec4e32 namespace=k8s.io
Jan 13 20:18:34.348558 containerd[1570]: time="2025-01-13T20:18:34.348554550Z" level=warning msg="cleaning up after shim disconnected" id=d9c8d767d99e3dfcc2f2d66b553dccf24e511c6e745f0e6a4aa81be9b5ec4e32 namespace=k8s.io
Jan 13 20:18:34.348558 containerd[1570]: time="2025-01-13T20:18:34.348563149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:18:35.258178 kubelet[2804]: E0113 20:18:35.257042 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:35.260119 containerd[1570]: time="2025-01-13T20:18:35.259865675Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:18:35.273509 containerd[1570]: time="2025-01-13T20:18:35.272880935Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125\""
Jan 13 20:18:35.273784 containerd[1570]: time="2025-01-13T20:18:35.273723687Z" level=info msg="StartContainer for \"2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125\""
Jan 13 20:18:35.329093 containerd[1570]: time="2025-01-13T20:18:35.329052821Z" level=info msg="StartContainer for \"2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125\" returns successfully"
Jan 13 20:18:35.344669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125-rootfs.mount: Deactivated successfully.
Jan 13 20:18:35.348022 containerd[1570]: time="2025-01-13T20:18:35.347910349Z" level=info msg="shim disconnected" id=2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125 namespace=k8s.io
Jan 13 20:18:35.348022 containerd[1570]: time="2025-01-13T20:18:35.347987105Z" level=warning msg="cleaning up after shim disconnected" id=2b64525b7ddf9203fca4667feae3c70f4c134b4d72722bfd50d72f5edf8e2125 namespace=k8s.io
Jan 13 20:18:35.348022 containerd[1570]: time="2025-01-13T20:18:35.347996024Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:18:36.261955 kubelet[2804]: E0113 20:18:36.260587 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:36.263689 containerd[1570]: time="2025-01-13T20:18:36.263648437Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:18:36.293667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823898548.mount: Deactivated successfully.
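The recurring dns.go:153 errors above come from the kubelet's DNS configurer: the node's resolv.conf lists more nameservers than the glibc resolver will honor (only the first three are used), so surplus entries are dropped when pod resolv.conf files are built, and the applied line shown in the error is what survives. A small sketch to spot that condition on a host (the three-server cap is the standard glibc MAXNS; the helper itself is illustrative):

```python
MAX_GLIBC_NAMESERVERS = 3  # glibc MAXNS: the resolver uses only the first three

def nameservers(path="/etc/resolv.conf"):
    """Collect nameserver addresses from a resolv.conf-style file."""
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers

ns = nameservers()
if len(ns) > MAX_GLIBC_NAMESERVERS:
    print(f"{len(ns)} nameservers configured; only {ns[:MAX_GLIBC_NAMESERVERS]} take effect")
```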
Jan 13 20:18:36.298820 containerd[1570]: time="2025-01-13T20:18:36.296477395Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6\""
Jan 13 20:18:36.299607 containerd[1570]: time="2025-01-13T20:18:36.299566433Z" level=info msg="StartContainer for \"11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6\""
Jan 13 20:18:36.365568 containerd[1570]: time="2025-01-13T20:18:36.365335384Z" level=info msg="StartContainer for \"11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6\" returns successfully"
Jan 13 20:18:36.387010 containerd[1570]: time="2025-01-13T20:18:36.386897173Z" level=info msg="shim disconnected" id=11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6 namespace=k8s.io
Jan 13 20:18:36.387010 containerd[1570]: time="2025-01-13T20:18:36.386957010Z" level=warning msg="cleaning up after shim disconnected" id=11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6 namespace=k8s.io
Jan 13 20:18:36.387010 containerd[1570]: time="2025-01-13T20:18:36.386978128Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:18:37.110215 kubelet[2804]: E0113 20:18:37.110166 2804 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:18:37.263988 kubelet[2804]: E0113 20:18:37.263950 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:37.267219 containerd[1570]: time="2025-01-13T20:18:37.267174306Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:18:37.277400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11f3316e7d3c27256a956511a2231e08cb784cfd82001fe15395fb8ca47471f6-rootfs.mount: Deactivated successfully.
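Each init container of cilium-29q52 above follows the same containerd lifecycle: CreateContainer returns an id, StartContainer returns successfully, and once the short-lived task exits, the shim disconnects and is cleaned up. A sketch that groups those events per container id when reading a journal dump like this one (the patterns assume the backslash-escaped quoting shown above; the grouping logic is ours):

```python
import re
from collections import defaultdict

# Event patterns as they appear in the journal dump above (note the
# backslash-escaped quotes inside containerd's msg="..." fields).
PATTERNS = {
    "created": re.compile(r'returns container id \\"(?P<id>[0-9a-f]{64})\\"'),
    "started": re.compile(r'StartContainer for \\"(?P<id>[0-9a-f]{64})\\" returns successfully'),
    "exited":  re.compile(r'msg="shim disconnected" id=(?P<id>[0-9a-f]{64})'),
}

def lifecycle(lines):
    """Map container id -> lifecycle events in log order, e.g. ['created', 'started', 'exited']."""
    events = defaultdict(list)
    for line in lines:
        for name, pattern in PATTERNS.items():
            m = pattern.search(line)
            if m:
                events[m.group("id")].append(name)
    return events
```

Run over the entries above, each init container id should accumulate created, started, exited in that order, the expected pattern for short-lived init containers rather than a crash.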
Jan 13 20:18:37.281620 containerd[1570]: time="2025-01-13T20:18:37.281574412Z" level=info msg="CreateContainer within sandbox \"30dae0e5b5c1e2836524f53e30222fee9c07fb795d3c896977ecc114e4c9ea7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9b5f8bfb9fb951744ef661c270741aa56bc13c612d260f5873c2f705e9b0638\""
Jan 13 20:18:37.282456 containerd[1570]: time="2025-01-13T20:18:37.282418452Z" level=info msg="StartContainer for \"f9b5f8bfb9fb951744ef661c270741aa56bc13c612d260f5873c2f705e9b0638\""
Jan 13 20:18:37.338316 containerd[1570]: time="2025-01-13T20:18:37.338276241Z" level=info msg="StartContainer for \"f9b5f8bfb9fb951744ef661c270741aa56bc13c612d260f5873c2f705e9b0638\" returns successfully"
Jan 13 20:18:37.599808 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:18:38.268867 kubelet[2804]: E0113 20:18:38.268843 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:38.749590 kubelet[2804]: I0113 20:18:38.749549 2804 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:18:38Z","lastTransitionTime":"2025-01-13T20:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:18:39.028873 kubelet[2804]: E0113 20:18:39.028323 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:39.877200 kubelet[2804]: E0113 20:18:39.877125 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:40.118573 kubelet[2804]: E0113 20:18:40.118469 2804 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36856->127.0.0.1:36689: write tcp 127.0.0.1:36856->127.0.0.1:36689: write: broken pipe
Jan 13 20:18:40.398476 systemd-networkd[1235]: lxc_health: Link UP
Jan 13 20:18:40.409123 systemd-networkd[1235]: lxc_health: Gained carrier
Jan 13 20:18:41.028231 kubelet[2804]: E0113 20:18:41.027846 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:41.803924 systemd-networkd[1235]: lxc_health: Gained IPv6LL
Jan 13 20:18:41.877649 kubelet[2804]: E0113 20:18:41.877319 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:41.904710 kubelet[2804]: I0113 20:18:41.904570 2804 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-29q52" podStartSLOduration=8.904535628 podStartE2EDuration="8.904535628s" podCreationTimestamp="2025-01-13 20:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:18:38.284064978 +0000 UTC m=+81.346769232" watchObservedRunningTime="2025-01-13 20:18:41.904535628 +0000 UTC m=+84.967239882"
Jan 13 20:18:42.279146 kubelet[2804]: E0113 20:18:42.278932 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:43.280933 kubelet[2804]: E0113 20:18:43.280894 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:48.027389 kubelet[2804]: E0113 20:18:48.027315 2804 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:52.792504 sshd[4643]: Connection closed by 10.0.0.1 port 40462
Jan 13 20:18:52.792922 sshd-session[4633]: pam_unix(sshd:session): session closed for user core
Jan 13 20:18:52.796533 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:40462.service: Deactivated successfully.
Jan 13 20:18:52.798884 systemd-logind[1553]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:18:52.799247 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:18:52.800118 systemd-logind[1553]: Removed session 26.
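For reference, the podStartSLOduration reported above for cilium-29q52 is simply the gap between podCreationTimestamp and the watch-observed running time, which can be reproduced from the two timestamps in that entry (Python's datetime carries only microseconds, so the last three digits are truncated):

```python
from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker entry above.
created = datetime(2025, 1, 13, 20, 18, 33, tzinfo=timezone.utc)          # podCreationTimestamp
running = datetime(2025, 1, 13, 20, 18, 41, 904535, tzinfo=timezone.utc)  # watchObservedRunningTime

print((running - created).total_seconds())  # 8.904535, matching podStartSLOduration=8.904535628
```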