Jan 13 21:32:13.873295 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 21:32:13.873315 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025 Jan 13 21:32:13.873324 kernel: KASLR enabled Jan 13 21:32:13.873330 kernel: efi: EFI v2.7 by EDK II Jan 13 21:32:13.873336 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 13 21:32:13.873341 kernel: random: crng init done Jan 13 21:32:13.873348 kernel: ACPI: Early table checksum verification disabled Jan 13 21:32:13.873354 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 13 21:32:13.873360 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 13 21:32:13.873367 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873374 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873379 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873385 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873391 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873399 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873406 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873412 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873419 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:32:13.873425 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 13 21:32:13.873431 kernel: NUMA: Failed to initialise from firmware Jan 13 21:32:13.873438 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:32:13.873444 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 13 21:32:13.873450 kernel: Zone ranges: Jan 13 21:32:13.873456 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:32:13.873462 kernel: DMA32 empty Jan 13 21:32:13.873470 kernel: Normal empty Jan 13 21:32:13.873476 kernel: Movable zone start for each node Jan 13 21:32:13.873482 kernel: Early memory node ranges Jan 13 21:32:13.873488 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 13 21:32:13.873494 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 13 21:32:13.873501 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 13 21:32:13.873507 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 13 21:32:13.873513 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 13 21:32:13.873520 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 13 21:32:13.873526 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 13 21:32:13.873532 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 21:32:13.873538 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 13 21:32:13.873546 kernel: psci: probing for conduit method from ACPI. Jan 13 21:32:13.873552 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 21:32:13.873559 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 21:32:13.873567 kernel: psci: Trusted OS migration not required Jan 13 21:32:13.873574 kernel: psci: SMC Calling Convention v1.1 Jan 13 21:32:13.873581 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 21:32:13.873589 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 21:32:13.873595 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 21:32:13.873602 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 13 21:32:13.873609 kernel: Detected PIPT I-cache on CPU0 Jan 13 21:32:13.873616 kernel: CPU features: detected: GIC system register CPU interface Jan 13 21:32:13.873622 kernel: CPU features: detected: Hardware dirty bit management Jan 13 21:32:13.873629 kernel: CPU features: detected: Spectre-v4 Jan 13 21:32:13.873636 kernel: CPU features: detected: Spectre-BHB Jan 13 21:32:13.873642 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 21:32:13.873649 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 21:32:13.873657 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 21:32:13.873664 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 21:32:13.873671 kernel: alternatives: applying boot alternatives Jan 13 21:32:13.873678 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:32:13.873685 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:32:13.873692 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:32:13.873699 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:32:13.873706 kernel: Fallback order for Node 0: 0 Jan 13 21:32:13.873712 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 13 21:32:13.873719 kernel: Policy zone: DMA Jan 13 21:32:13.873726 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:32:13.873733 kernel: software IO TLB: area num 4. Jan 13 21:32:13.873740 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 13 21:32:13.873748 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 13 21:32:13.873755 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 21:32:13.873761 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:32:13.873768 kernel: rcu: RCU event tracing is enabled. Jan 13 21:32:13.873775 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 21:32:13.873782 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:32:13.873789 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:32:13.873795 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 21:32:13.873802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 21:32:13.873809 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 21:32:13.873817 kernel: GICv3: 256 SPIs implemented Jan 13 21:32:13.873823 kernel: GICv3: 0 Extended SPIs implemented Jan 13 21:32:13.873830 kernel: Root IRQ handler: gic_handle_irq Jan 13 21:32:13.873836 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 21:32:13.873843 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 21:32:13.873850 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 21:32:13.873856 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 21:32:13.873863 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 21:32:13.873870 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 13 21:32:13.873876 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 13 21:32:13.873883 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:32:13.873891 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:32:13.873898 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 21:32:13.873905 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 21:32:13.873911 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 21:32:13.873918 kernel: arm-pv: using stolen time PV Jan 13 21:32:13.873925 kernel: Console: colour dummy device 80x25 Jan 13 21:32:13.873932 kernel: ACPI: Core revision 20230628 Jan 13 21:32:13.873939 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 21:32:13.873946 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:32:13.873953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:32:13.873960 kernel: landlock: Up and running. Jan 13 21:32:13.873967 kernel: SELinux: Initializing. Jan 13 21:32:13.873974 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:32:13.873981 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:32:13.873988 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:32:13.873995 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 21:32:13.874001 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:32:13.874008 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:32:13.874022 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 21:32:13.874031 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 21:32:13.874038 kernel: Remapping and enabling EFI services. Jan 13 21:32:13.874045 kernel: smp: Bringing up secondary CPUs ... 
Jan 13 21:32:13.874052 kernel: Detected PIPT I-cache on CPU1 Jan 13 21:32:13.874059 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 21:32:13.874065 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 13 21:32:13.874072 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:32:13.874079 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 21:32:13.874086 kernel: Detected PIPT I-cache on CPU2 Jan 13 21:32:13.874093 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 13 21:32:13.874101 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 13 21:32:13.874108 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:32:13.874119 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 13 21:32:13.874127 kernel: Detected PIPT I-cache on CPU3 Jan 13 21:32:13.874134 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 13 21:32:13.874141 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 13 21:32:13.874149 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 21:32:13.874156 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 13 21:32:13.874163 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 21:32:13.874171 kernel: SMP: Total of 4 processors activated. Jan 13 21:32:13.874178 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 21:32:13.874186 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 21:32:13.874193 kernel: CPU features: detected: Common not Private translations Jan 13 21:32:13.874200 kernel: CPU features: detected: CRC32 instructions Jan 13 21:32:13.874207 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 21:32:13.874214 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 21:32:13.874222 kernel: CPU features: detected: LSE atomic instructions Jan 13 21:32:13.874230 kernel: CPU features: detected: Privileged Access Never Jan 13 21:32:13.874237 kernel: CPU features: detected: RAS Extension Support Jan 13 21:32:13.874244 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 21:32:13.874251 kernel: CPU: All CPU(s) started at EL1 Jan 13 21:32:13.874259 kernel: alternatives: applying system-wide alternatives Jan 13 21:32:13.874266 kernel: devtmpfs: initialized Jan 13 21:32:13.874291 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:32:13.874300 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 21:32:13.874307 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:32:13.874317 kernel: SMBIOS 3.0.0 present. 
Jan 13 21:32:13.874324 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 13 21:32:13.874331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:32:13.874339 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 21:32:13.874346 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 21:32:13.874353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 21:32:13.874361 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:32:13.874368 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jan 13 21:32:13.874375 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:32:13.874384 kernel: cpuidle: using governor menu Jan 13 21:32:13.874391 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 21:32:13.874398 kernel: ASID allocator initialised with 32768 entries Jan 13 21:32:13.874405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:32:13.874412 kernel: Serial: AMBA PL011 UART driver Jan 13 21:32:13.874420 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 21:32:13.874427 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 21:32:13.874434 kernel: Modules: 509040 pages in range for PLT usage Jan 13 21:32:13.874441 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:32:13.874449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:32:13.874457 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 21:32:13.874464 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 21:32:13.874471 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:32:13.874478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:32:13.874485 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 21:32:13.874493 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 21:32:13.874500 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:32:13.874507 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:32:13.874515 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:32:13.874522 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:32:13.874529 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:32:13.874536 kernel: ACPI: Interpreter enabled Jan 13 21:32:13.874543 kernel: ACPI: Using GIC for interrupt routing Jan 13 21:32:13.874550 kernel: ACPI: MCFG table detected, 1 entries Jan 13 21:32:13.874558 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 21:32:13.874565 kernel: printk: console [ttyAMA0] enabled Jan 13 21:32:13.874572 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:32:13.874697 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:32:13.874768 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 21:32:13.874832 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 21:32:13.874893 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 21:32:13.874954 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 21:32:13.874964 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 
21:32:13.874971 kernel: PCI host bridge to bus 0000:00 Jan 13 21:32:13.875047 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 21:32:13.875105 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 21:32:13.875160 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 21:32:13.875215 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:32:13.875301 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 21:32:13.875376 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:32:13.875474 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 13 21:32:13.875542 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 13 21:32:13.875606 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:32:13.875670 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 21:32:13.875733 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 13 21:32:13.875797 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 13 21:32:13.875860 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 21:32:13.875919 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 21:32:13.875974 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 21:32:13.875984 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 21:32:13.875991 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 21:32:13.875999 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 21:32:13.876006 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 21:32:13.876019 kernel: iommu: Default domain type: Translated Jan 13 21:32:13.876027 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 21:32:13.876034 kernel: efivars: Registered efivars operations Jan 13 21:32:13.876043 kernel: vgaarb: loaded Jan 13 21:32:13.876050 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 21:32:13.876058 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:32:13.876065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:32:13.876072 kernel: pnp: PnP ACPI init Jan 13 21:32:13.876146 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 21:32:13.876156 kernel: pnp: PnP ACPI: found 1 devices Jan 13 21:32:13.876164 kernel: NET: Registered PF_INET protocol family Jan 13 21:32:13.876173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:32:13.876181 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:32:13.876188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:32:13.876195 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:32:13.876203 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:32:13.876210 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:32:13.876217 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:32:13.876225 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:32:13.876232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:32:13.876241 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:32:13.876249 kernel: kvm [1]: HYP mode 
not available Jan 13 21:32:13.876256 kernel: Initialise system trusted keyrings Jan 13 21:32:13.876263 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:32:13.876270 kernel: Key type asymmetric registered Jan 13 21:32:13.876295 kernel: Asymmetric key parser 'x509' registered Jan 13 21:32:13.876302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 21:32:13.876310 kernel: io scheduler mq-deadline registered Jan 13 21:32:13.876317 kernel: io scheduler kyber registered Jan 13 21:32:13.876326 kernel: io scheduler bfq registered Jan 13 21:32:13.876333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 21:32:13.876341 kernel: ACPI: button: Power Button [PWRB] Jan 13 21:32:13.876348 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:32:13.876416 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 13 21:32:13.876426 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:32:13.876433 kernel: thunder_xcv, ver 1.0 Jan 13 21:32:13.876440 kernel: thunder_bgx, ver 1.0 Jan 13 21:32:13.876447 kernel: nicpf, ver 1.0 Jan 13 21:32:13.876456 kernel: nicvf, ver 1.0 Jan 13 21:32:13.876531 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 21:32:13.876591 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:32:13 UTC (1736803933) Jan 13 21:32:13.876601 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:32:13.876608 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 21:32:13.876615 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 21:32:13.876623 kernel: watchdog: Hard watchdog permanently disabled Jan 13 21:32:13.876630 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:32:13.876638 kernel: Segment Routing with IPv6 Jan 13 21:32:13.876645 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:32:13.876653 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:32:13.876660 kernel: Key type dns_resolver registered Jan 13 21:32:13.876667 kernel: registered taskstats version 1 Jan 13 21:32:13.876674 kernel: Loading compiled-in X.509 certificates Jan 13 21:32:13.876681 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 13 21:32:13.876688 kernel: Key type .fscrypt registered Jan 13 21:32:13.876696 kernel: Key type fscrypt-provisioning registered Jan 13 21:32:13.876704 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:32:13.876711 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:32:13.876718 kernel: ima: No architecture policies found Jan 13 21:32:13.876726 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 21:32:13.876733 kernel: clk: Disabling unused clocks Jan 13 21:32:13.876740 kernel: Freeing unused kernel memory: 39360K Jan 13 21:32:13.876747 kernel: Run /init as init process Jan 13 21:32:13.876754 kernel: with arguments: Jan 13 21:32:13.876761 kernel: /init Jan 13 21:32:13.876769 kernel: with environment: Jan 13 21:32:13.876776 kernel: HOME=/ Jan 13 21:32:13.876783 kernel: TERM=linux Jan 13 21:32:13.876790 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:32:13.876799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:32:13.876808 systemd[1]: Detected virtualization kvm. Jan 13 21:32:13.876816 systemd[1]: Detected architecture arm64. Jan 13 21:32:13.876823 systemd[1]: Running in initrd. Jan 13 21:32:13.876832 systemd[1]: No hostname configured, using default hostname. Jan 13 21:32:13.876839 systemd[1]: Hostname set to . Jan 13 21:32:13.876847 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:32:13.876855 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:32:13.876863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:13.876871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:13.876879 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:32:13.876887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:32:13.876896 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:32:13.876904 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:32:13.876913 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:32:13.876921 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:32:13.876929 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:13.876937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:13.876946 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:32:13.876954 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:32:13.876961 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:32:13.876969 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:32:13.876977 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:32:13.876984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:32:13.876992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:32:13.877000 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:32:13.877007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:32:13.877023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:13.877031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:13.877039 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:32:13.877046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:32:13.877054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:32:13.877062 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:32:13.877070 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:32:13.877077 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:32:13.877085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:32:13.877095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:13.877103 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:32:13.877111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:13.877119 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:32:13.877141 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:32:13.877151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:13.877159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:32:13.877183 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 21:32:13.877203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:13.877212 systemd-journald[237]: Journal started Jan 13 21:32:13.877232 systemd-journald[237]: Runtime Journal (/run/log/journal/fc6c38a367604a198635f2c87fc2376d) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:32:13.869057 systemd-modules-load[238]: Inserted module 'overlay' Jan 13 21:32:13.880287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:32:13.880320 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:32:13.881403 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:32:13.885300 kernel: Bridge firewalling registered Jan 13 21:32:13.884420 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 13 21:32:13.884627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:32:13.886344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:13.889768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:32:13.890708 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:13.892172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:13.894678 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:32:13.895832 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:13.900769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:13.902692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 21:32:13.907557 dracut-cmdline[273]: dracut-dracut-053 Jan 13 21:32:13.909901 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:32:13.933372 systemd-resolved[280]: Positive Trust Anchors: Jan 13 21:32:13.933389 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:32:13.933420 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:32:13.937976 systemd-resolved[280]: Defaulting to hostname 'linux'. Jan 13 21:32:13.940579 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:32:13.941380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:13.976298 kernel: SCSI subsystem initialized Jan 13 21:32:13.983293 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:32:13.991295 kernel: iscsi: registered transport (tcp) Jan 13 21:32:14.002317 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:32:14.002351 kernel: QLogic iSCSI HBA Driver Jan 13 21:32:14.041433 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:32:14.052388 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:32:14.066699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:32:14.067641 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:32:14.067652 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:32:14.113301 kernel: raid6: neonx8 gen() 15744 MB/s Jan 13 21:32:14.130301 kernel: raid6: neonx4 gen() 15643 MB/s Jan 13 21:32:14.147299 kernel: raid6: neonx2 gen() 13259 MB/s Jan 13 21:32:14.164293 kernel: raid6: neonx1 gen() 10473 MB/s Jan 13 21:32:14.181287 kernel: raid6: int64x8 gen() 6968 MB/s Jan 13 21:32:14.198295 kernel: raid6: int64x4 gen() 7318 MB/s Jan 13 21:32:14.215295 kernel: raid6: int64x2 gen() 6127 MB/s Jan 13 21:32:14.232294 kernel: raid6: int64x1 gen() 5058 MB/s Jan 13 21:32:14.232318 kernel: raid6: using algorithm neonx8 gen() 15744 MB/s Jan 13 21:32:14.249301 kernel: raid6: .... xor() 11894 MB/s, rmw enabled Jan 13 21:32:14.249325 kernel: raid6: using neon recovery algorithm Jan 13 21:32:14.254530 kernel: xor: measuring software checksum speed Jan 13 21:32:14.254544 kernel: 8regs : 19793 MB/sec Jan 13 21:32:14.254553 kernel: 32regs : 19101 MB/sec Jan 13 21:32:14.255473 kernel: arm64_neon : 25712 MB/sec Jan 13 21:32:14.255501 kernel: xor: using function: arm64_neon (25712 MB/sec) Jan 13 21:32:14.305630 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:32:14.316247 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:32:14.328491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:14.338801 systemd-udevd[461]: Using default interface naming scheme 'v255'. Jan 13 21:32:14.341856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:14.354410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:32:14.365451 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jan 13 21:32:14.389773 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:32:14.398422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:32:14.435568 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:14.442427 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:32:14.454797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:32:14.455980 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:32:14.457704 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:14.459360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:32:14.469457 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:32:14.477720 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 13 21:32:14.483218 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:32:14.483329 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:32:14.483345 kernel: GPT:9289727 != 19775487 Jan 13 21:32:14.483355 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:32:14.483364 kernel: GPT:9289727 != 19775487 Jan 13 21:32:14.483375 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:32:14.483384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:32:14.477699 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:32:14.491022 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:32:14.491143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:14.493222 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:32:14.496348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:32:14.501032 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (516) Jan 13 21:32:14.501052 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509) Jan 13 21:32:14.496487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:14.500429 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:14.506522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:14.516095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:14.520975 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:32:14.525668 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:32:14.530090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 13 21:32:14.533769 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:32:14.534638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:32:14.547428 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:32:14.548827 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:32:14.553809 disk-uuid[549]: Primary Header is updated. Jan 13 21:32:14.553809 disk-uuid[549]: Secondary Entries is updated. Jan 13 21:32:14.553809 disk-uuid[549]: Secondary Header is updated. Jan 13 21:32:14.559576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:32:14.571663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:15.571225 disk-uuid[550]: The operation has completed successfully. Jan 13 21:32:15.572118 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:32:15.592416 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:32:15.592509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:32:15.613425 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:32:15.616820 sh[573]: Success Jan 13 21:32:15.629297 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:32:15.655049 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:32:15.665388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:32:15.666741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:32:15.675950 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:32:15.676009 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:32:15.676031 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:32:15.677364 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:32:15.677379 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:32:15.681382 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:32:15.682146 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:32:15.691400 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:32:15.692614 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:32:15.700271 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:32:15.700319 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:32:15.700330 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:32:15.702298 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:32:15.710185 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:32:15.711420 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:32:15.716675 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:32:15.723428 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 21:32:15.776537 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:32:15.785415 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:32:15.807357 systemd-networkd[765]: lo: Link UP Jan 13 21:32:15.807365 systemd-networkd[765]: lo: Gained carrier Jan 13 21:32:15.807986 systemd-networkd[765]: Enumeration completed Jan 13 21:32:15.808552 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:15.808555 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:32:15.809244 systemd-networkd[765]: eth0: Link UP Jan 13 21:32:15.809247 systemd-networkd[765]: eth0: Gained carrier Jan 13 21:32:15.809253 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:15.810874 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:32:15.811734 systemd[1]: Reached target network.target - Network. Jan 13 21:32:15.823108 ignition[668]: Ignition 2.19.0 Jan 13 21:32:15.823717 ignition[668]: Stage: fetch-offline Jan 13 21:32:15.823755 ignition[668]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:15.823763 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:15.823919 ignition[668]: parsed url from cmdline: "" Jan 13 21:32:15.826306 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:32:15.823922 ignition[668]: no config URL provided Jan 13 21:32:15.823927 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:32:15.823934 ignition[668]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:32:15.823955 ignition[668]: op(1): [started] loading QEMU firmware config module Jan 13 21:32:15.823960 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:32:15.831029 ignition[668]: op(1): [finished] loading QEMU firmware config module Jan 13 21:32:15.868347 ignition[668]: parsing config with SHA512: c9db91e2db7cd849b0cd5d0d0f5a9d0126a25668e72a4dd506ea2f1e2661d580cd8dd764d0304987db92e8fdff2bc7b2271dd6d40d0363aa999c38b3da805ee5 Jan 13 21:32:15.872413 unknown[668]: fetched base config from "system" Jan 13 21:32:15.872422 unknown[668]: fetched user config from "qemu" Jan 13 21:32:15.872836 ignition[668]: fetch-offline: fetch-offline passed Jan 13 21:32:15.872893 ignition[668]: Ignition finished successfully Jan 13 21:32:15.874598 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:32:15.876132 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:32:15.882418 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:32:15.892371 ignition[772]: Ignition 2.19.0 Jan 13 21:32:15.892380 ignition[772]: Stage: kargs Jan 13 21:32:15.892522 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:15.892531 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:15.893426 ignition[772]: kargs: kargs passed Jan 13 21:32:15.893467 ignition[772]: Ignition finished successfully Jan 13 21:32:15.897029 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 13 21:32:15.905407 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:32:15.914171 ignition[780]: Ignition 2.19.0 Jan 13 21:32:15.914185 ignition[780]: Stage: disks Jan 13 21:32:15.914358 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:15.914368 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:15.915236 ignition[780]: disks: disks passed Jan 13 21:32:15.915298 ignition[780]: Ignition finished successfully Jan 13 21:32:15.918327 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:32:15.919883 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:32:15.920703 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:32:15.922150 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:32:15.923684 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:32:15.924974 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:32:15.936454 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:32:15.946293 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:32:15.950233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:32:15.952229 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:32:15.998292 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:32:15.998499 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:32:15.999403 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:32:16.006351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:32:16.007665 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:32:16.008867 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:32:16.012314 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Jan 13 21:32:16.008902 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:32:16.008922 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:32:16.016609 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:32:16.016624 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:32:16.016634 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:32:16.012718 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:32:16.018700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:32:16.021301 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:32:16.022103 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:32:16.058479 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:32:16.061378 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:32:16.064937 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:32:16.068398 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:32:16.132925 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:32:16.151349 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:32:16.152588 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:32:16.157290 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:32:16.169729 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:32:16.172087 ignition[913]: INFO : Ignition 2.19.0 Jan 13 21:32:16.172087 ignition[913]: INFO : Stage: mount Jan 13 21:32:16.173256 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:16.173256 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:16.173256 ignition[913]: INFO : mount: mount passed Jan 13 21:32:16.173256 ignition[913]: INFO : Ignition finished successfully Jan 13 21:32:16.175857 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:32:16.186348 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:32:16.675550 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:32:16.684475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:32:16.689931 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Jan 13 21:32:16.689971 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:32:16.690000 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:32:16.691295 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:32:16.693302 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:32:16.694076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:32:16.709128 ignition[944]: INFO : Ignition 2.19.0 Jan 13 21:32:16.709128 ignition[944]: INFO : Stage: files Jan 13 21:32:16.710261 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:16.710261 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:16.710261 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:32:16.712951 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:32:16.712951 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:32:16.712951 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:32:16.712951 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:32:16.717041 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:32:16.717041 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:32:16.717041 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:32:16.717041 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:32:16.717041 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 21:32:16.713030 unknown[944]: wrote ssh authorized keys file for user: core Jan 13 21:32:16.762183 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:32:16.932536 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:32:16.932536 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:32:16.935417 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 21:32:17.231101 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 21:32:17.296239 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:32:17.297613 ignition[944]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:32:17.297613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 21:32:17.532760 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 21:32:17.724412 systemd-networkd[765]: eth0: Gained IPv6LL Jan 13 21:32:17.770723 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:32:17.772546 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 13 21:32:17.772546 ignition[944]: INFO 
: files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:32:17.794528 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:32:17.798313 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:32:17.798313 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:32:17.798313 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:32:17.798313 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:32:17.798313 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:32:17.798313 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:32:17.798313 ignition[944]: INFO : files: files passed Jan 13 21:32:17.798313 ignition[944]: INFO : Ignition finished successfully Jan 13 21:32:17.802561 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:32:17.812434 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:32:17.814399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:32:17.816394 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:32:17.816477 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:32:17.821642 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:32:17.823657 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:17.823657 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:17.825853 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:17.825335 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:17.827000 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:32:17.828978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:32:17.848213 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:32:17.849331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:32:17.851144 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:32:17.852245 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:32:17.853640 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:32:17.854314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:32:17.868036 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:17.874419 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:32:17.882590 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 21:32:17.884202 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:17.885198 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:32:17.885935 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:32:17.886069 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:17.887958 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:32:17.889240 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:32:17.890566 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:32:17.891883 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:32:17.893157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:32:17.894551 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:32:17.895906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:32:17.897364 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:32:17.898670 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:32:17.900061 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:32:17.901195 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:32:17.901325 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:32:17.903011 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:17.904330 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:17.905698 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:32:17.906329 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:17.907951 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:32:17.908074 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:32:17.910242 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:32:17.910373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:32:17.911759 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:32:17.912870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:32:17.914343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:17.915691 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:32:17.916940 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:32:17.918593 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:32:17.918680 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:32:17.919862 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:32:17.919943 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:32:17.921054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:32:17.921160 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:17.922326 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:32:17.922430 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:32:17.932457 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 13 21:32:17.933752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:32:17.934374 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:32:17.934485 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:17.935912 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:32:17.936013 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:32:17.940714 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:32:17.940809 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:32:17.944211 ignition[998]: INFO : Ignition 2.19.0 Jan 13 21:32:17.944211 ignition[998]: INFO : Stage: umount Jan 13 21:32:17.945492 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:17.945492 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:32:17.945492 ignition[998]: INFO : umount: umount passed Jan 13 21:32:17.945492 ignition[998]: INFO : Ignition finished successfully Jan 13 21:32:17.946713 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:32:17.948306 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:32:17.949922 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:32:17.950327 systemd[1]: Stopped target network.target - Network. Jan 13 21:32:17.951473 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:32:17.951525 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:32:17.952752 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:32:17.952793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:32:17.953980 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:32:17.954019 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:32:17.955123 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:32:17.955161 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:32:17.956593 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:32:17.957801 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:32:17.964977 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:32:17.965087 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:32:17.967529 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:32:17.967571 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:17.973333 systemd-networkd[765]: eth0: DHCPv6 lease lost Jan 13 21:32:17.974683 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:32:17.974789 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:32:17.976458 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:32:17.976488 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:32:17.986376 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:32:17.987016 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:32:17.987062 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:32:17.988517 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 13 21:32:17.988553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:17.989857 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:32:17.989894 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:17.991415 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:17.999168 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:32:17.999983 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:32:18.005038 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:32:18.005221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:18.006819 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:32:18.006855 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:18.008251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:32:18.008311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:18.009599 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:32:18.009647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:32:18.011717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:32:18.011761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:32:18.013738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:32:18.013785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:18.019422 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:32:18.020156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:32:18.020202 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:18.021764 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:32:18.021801 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:18.023162 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:32:18.023196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:18.024781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:32:18.024816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:18.026438 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:32:18.026511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:32:18.028601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:32:18.029317 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:32:18.031076 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:32:18.031924 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:32:18.031986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:32:18.037376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:32:18.043331 systemd[1]: Switching root. Jan 13 21:32:18.076723 systemd-journald[237]: Journal stopped Jan 13 21:32:18.776449 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:32:18.776500 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:32:18.776512 kernel: SELinux: policy capability open_perms=1 Jan 13 21:32:18.776522 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:32:18.776531 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:32:18.776541 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:32:18.776554 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:32:18.776563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:32:18.776572 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:32:18.776582 kernel: audit: type=1403 audit(1736803938.263:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:32:18.776593 systemd[1]: Successfully loaded SELinux policy in 32.032ms. Jan 13 21:32:18.776612 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.784ms. Jan 13 21:32:18.776627 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:32:18.776638 systemd[1]: Detected virtualization kvm. Jan 13 21:32:18.776648 systemd[1]: Detected architecture arm64. Jan 13 21:32:18.776660 systemd[1]: Detected first boot. Jan 13 21:32:18.776670 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:32:18.776680 zram_generator::config[1060]: No configuration found. Jan 13 21:32:18.776691 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:32:18.776703 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:32:18.776714 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:32:18.776724 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:32:18.776735 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:32:18.776751 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:32:18.776762 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:32:18.776773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:32:18.776783 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:32:18.776794 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:32:18.776804 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:32:18.776814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:18.776825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:18.776835 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:32:18.776847 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:32:18.776858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:32:18.776868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
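The entries above show the SELinux policy loading in about 32 ms and the machine ID being initialized from the VM UUID on first boot. A minimal sketch, assuming /sys/fs/selinux and /etc/machine-id are present as on a stock image, that reads both states back from user space:

#!/usr/bin/env python3
"""Report SELinux enforcement state and the generated machine ID."""
from pathlib import Path

enforce = Path("/sys/fs/selinux/enforce")
machine_id = Path("/etc/machine-id")

# 0 = permissive, 1 = enforcing; the file is absent when SELinux is not enabled.
if enforce.exists():
    mode = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
    print(f"SELinux: enabled, {mode}")
else:
    print("SELinux: not enabled")

print(f"machine-id: {machine_id.read_text().strip()}")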
Jan 13 21:32:18.776879 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:32:18.776889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:18.776900 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:32:18.776910 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:18.776920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:32:18.776933 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:32:18.776944 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:32:18.776954 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:32:18.776973 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:32:18.776986 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:32:18.776997 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:32:18.777007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:32:18.777017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:18.777028 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:18.777041 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:32:18.777051 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:32:18.777062 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:32:18.777072 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:32:18.777082 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:32:18.777093 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:32:18.777104 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:32:18.777114 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:32:18.777126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:18.777136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:32:18.777147 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:32:18.777157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:18.777169 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:18.777179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:18.777189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:32:18.777200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:18.777210 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:32:18.777223 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 21:32:18.777234 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 13 21:32:18.777243 kernel: fuse: init (API version 7.39) Jan 13 21:32:18.777253 kernel: loop: module loaded Jan 13 21:32:18.777262 kernel: ACPI: bus type drm_connector registered Jan 13 21:32:18.777278 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:32:18.777292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:32:18.777302 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:32:18.777312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:32:18.777325 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:32:18.777335 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:32:18.777364 systemd-journald[1142]: Collecting audit messages is disabled. Jan 13 21:32:18.777385 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:32:18.777395 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:32:18.777406 systemd-journald[1142]: Journal started Jan 13 21:32:18.777428 systemd-journald[1142]: Runtime Journal (/run/log/journal/fc6c38a367604a198635f2c87fc2376d) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:32:18.779878 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:32:18.780814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:32:18.781759 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:32:18.782671 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:32:18.783638 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:32:18.784812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:18.786001 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:32:18.786154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:32:18.787266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:18.787432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:18.788474 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:18.788618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:32:18.789753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:18.789913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:18.791047 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:32:18.791202 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:32:18.792258 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:18.792483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:18.793792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:18.794921 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:32:18.796141 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:32:18.806779 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:32:18.817367 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
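systemd-journald reports its runtime journal limits above (5.9M used, 47.3M max). A quick way to reproduce a comparable figure later from a shell session is journalctl's disk-usage query, sketched here via subprocess and assuming journalctl is on PATH:

#!/usr/bin/env python3
"""Query how much disk the journal currently uses, mirroring the size report above."""
import subprocess

# --disk-usage prints a single human-readable summary line.
result = subprocess.run(
    ["journalctl", "--disk-usage", "--no-pager"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())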
Jan 13 21:32:18.819069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:32:18.819928 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:32:18.822361 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:32:18.826540 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:32:18.827355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:18.829047 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:32:18.830086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:18.833374 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:32:18.834185 systemd-journald[1142]: Time spent on flushing to /var/log/journal/fc6c38a367604a198635f2c87fc2376d is 11.625ms for 848 entries. Jan 13 21:32:18.834185 systemd-journald[1142]: System Journal (/var/log/journal/fc6c38a367604a198635f2c87fc2376d) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:32:18.858937 systemd-journald[1142]: Received client request to flush runtime journal. Jan 13 21:32:18.835232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:32:18.837605 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:18.838793 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:32:18.839810 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:32:18.854531 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:32:18.855653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:32:18.863605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:18.865060 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:32:18.866560 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 13 21:32:18.866577 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 13 21:32:18.870335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:32:18.871725 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:18.873937 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:32:18.880450 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:32:18.898437 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:32:18.909547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:32:18.919879 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 13 21:32:18.919899 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 13 21:32:18.923415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:19.229323 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 13 21:32:19.245432 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:19.263427 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Jan 13 21:32:19.279862 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:19.290411 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:32:19.300418 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:32:19.317814 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 13 21:32:19.323304 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1238) Jan 13 21:32:19.339880 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:32:19.353489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:32:19.399426 systemd-networkd[1229]: lo: Link UP Jan 13 21:32:19.399432 systemd-networkd[1229]: lo: Gained carrier Jan 13 21:32:19.400039 systemd-networkd[1229]: Enumeration completed Jan 13 21:32:19.400135 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:32:19.400681 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:19.400688 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:32:19.403432 systemd-networkd[1229]: eth0: Link UP Jan 13 21:32:19.403439 systemd-networkd[1229]: eth0: Gained carrier Jan 13 21:32:19.403452 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:19.408487 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:32:19.412131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:19.421061 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:32:19.423334 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:32:19.423341 systemd-networkd[1229]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:32:19.437571 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:19.448898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:19.468608 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:32:19.469781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:19.484415 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:32:19.487532 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:19.517659 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:32:19.518740 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:32:19.519710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:32:19.519739 systemd[1]: Reached target local-fs.target - Local File Systems. 
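systemd-networkd's enumeration above ends with eth0 gaining carrier and acquiring 10.0.0.130/16 from 10.0.0.1 over DHCPv4. A sketch that asks networkctl for the same link state and lease details (assuming networkctl from the same systemd build is available on the booted system):

#!/usr/bin/env python3
"""Show link state and DHCP lease details for eth0, matching the networkd entries above."""
import subprocess

for cmd in (["networkctl", "list", "--no-pager"],
            ["networkctl", "status", "eth0", "--no-pager"]):
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(out.stdout.rstrip())
    print("-" * 40)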
Jan 13 21:32:19.520487 systemd[1]: Reached target machines.target - Containers. Jan 13 21:32:19.522137 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:32:19.537427 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:32:19.539475 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:32:19.540320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:19.541182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:32:19.543066 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:32:19.546517 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:32:19.548120 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:32:19.558406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:32:19.559992 kernel: loop0: detected capacity change from 0 to 194512 Jan 13 21:32:19.563717 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:32:19.565167 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:32:19.570298 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:32:19.603345 kernel: loop1: detected capacity change from 0 to 114432 Jan 13 21:32:19.645305 kernel: loop2: detected capacity change from 0 to 114328 Jan 13 21:32:19.694304 kernel: loop3: detected capacity change from 0 to 194512 Jan 13 21:32:19.700297 kernel: loop4: detected capacity change from 0 to 114432 Jan 13 21:32:19.704301 kernel: loop5: detected capacity change from 0 to 114328 Jan 13 21:32:19.709613 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:32:19.709969 (sd-merge)[1290]: Merged extensions into '/usr'. Jan 13 21:32:19.712846 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:32:19.712859 systemd[1]: Reloading... Jan 13 21:32:19.752296 zram_generator::config[1318]: No configuration found. Jan 13 21:32:19.770573 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:32:19.840807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:19.882371 systemd[1]: Reloading finished in 169 ms. Jan 13 21:32:19.898711 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:32:19.899823 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:32:19.917391 systemd[1]: Starting ensure-sysext.service... Jan 13 21:32:19.918845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:32:19.922220 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:32:19.922228 systemd[1]: Reloading... Jan 13 21:32:19.933622 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
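The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. A sketch, using only paths that appear earlier in this log and the systemd-sysext tool from the same image, that lists the extension symlinks Ignition created and the current merge status:

#!/usr/bin/env python3
"""List installed extension images and the current systemd-sysext merge state."""
import os
import subprocess
from pathlib import Path

ext_dir = Path("/etc/extensions")
for link in sorted(ext_dir.glob("*.raw")):
    # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw
    print(f"{link.name} -> {os.readlink(link)}")

status = subprocess.run(["systemd-sysext", "status"],
                        capture_output=True, text=True, check=True)
print(status.stdout.rstrip())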
Jan 13 21:32:19.933920 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:32:19.934655 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:32:19.934920 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 13 21:32:19.934985 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 13 21:32:19.937255 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:19.937269 systemd-tmpfiles[1360]: Skipping /boot Jan 13 21:32:19.946481 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:19.946495 systemd-tmpfiles[1360]: Skipping /boot Jan 13 21:32:19.956286 zram_generator::config[1388]: No configuration found. Jan 13 21:32:20.042035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:20.083702 systemd[1]: Reloading finished in 161 ms. Jan 13 21:32:20.095706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:20.111545 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:20.113543 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:32:20.115341 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:32:20.118395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:32:20.122411 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:32:20.126932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:20.135965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:20.141820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:20.149470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:20.150570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:20.151429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:32:20.152777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:20.152899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:20.154319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:20.154440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:20.155788 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:20.155965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:20.166863 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:32:20.169369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:20.170925 augenrules[1463]: No rules Jan 13 21:32:20.171514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 13 21:32:20.173499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:20.175995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:20.176921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:20.178062 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:32:20.181437 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:32:20.182505 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:20.183805 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:32:20.185209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:20.185351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:20.186526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:20.186659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:20.187867 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:20.188050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:20.191959 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:32:20.196815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:20.205481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:20.207487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:20.211202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:20.217495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:20.218380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:20.218505 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:32:20.219319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:20.219498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:20.220727 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:20.220856 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:32:20.222049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:20.222190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:20.223499 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:20.223724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:20.225214 systemd-resolved[1434]: Positive Trust Anchors: Jan 13 21:32:20.225232 systemd-resolved[1434]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:32:20.225264 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:32:20.228523 systemd[1]: Finished ensure-sysext.service. Jan 13 21:32:20.230171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:20.230230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:20.230638 systemd-resolved[1434]: Defaulting to hostname 'linux'. Jan 13 21:32:20.238442 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:32:20.239335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:32:20.240209 systemd[1]: Reached target network.target - Network. Jan 13 21:32:20.240997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:20.277232 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:32:20.681492 systemd-resolved[1434]: Clock change detected. Flushing caches. Jan 13 21:32:20.681541 systemd-timesyncd[1502]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:32:20.681580 systemd-timesyncd[1502]: Initial clock synchronization to Mon 2025-01-13 21:32:20.681451 UTC. Jan 13 21:32:20.682115 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:32:20.682934 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:32:20.683842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:32:20.684717 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:32:20.685643 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:32:20.685673 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:32:20.686335 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:32:20.687167 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:32:20.688035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:32:20.688905 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:32:20.690096 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:32:20.692057 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:32:20.693964 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:32:20.701727 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:32:20.702547 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:32:20.703262 systemd[1]: Reached target basic.target - Basic System. 
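Above, systemd-resolved loads the standard DNSSEC root trust anchor and defaults the hostname to 'linux', while systemd-timesyncd synchronizes the clock against 10.0.0.1. A sketch that reads both states back, assuming resolvectl and timedatectl are available on the running system:

#!/usr/bin/env python3
"""Check resolver and time-sync state matching the resolved/timesyncd entries above."""
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

print(run(["resolvectl", "status"]))
# Reports "yes" once timesyncd has completed its initial synchronization.
print("NTP synchronized:", run(["timedatectl", "show", "--property=NTPSynchronized", "--value"]))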
Jan 13 21:32:20.704050 systemd[1]: System is tainted: cgroupsv1 Jan 13 21:32:20.704090 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:32:20.704110 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:32:20.705075 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:32:20.706731 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:32:20.708322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:32:20.713008 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:32:20.713749 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:32:20.714722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:32:20.721896 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:32:20.722559 jq[1508]: false Jan 13 21:32:20.726600 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:32:20.731902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:32:20.736275 dbus-daemon[1507]: [system] SELinux support is enabled Jan 13 21:32:20.736641 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:32:20.741392 extend-filesystems[1509]: Found loop3 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found loop4 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found loop5 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda1 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda2 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda3 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found usr Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda4 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda6 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda7 Jan 13 21:32:20.741392 extend-filesystems[1509]: Found vda9 Jan 13 21:32:20.741392 extend-filesystems[1509]: Checking size of /dev/vda9 Jan 13 21:32:20.743447 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:32:20.754981 extend-filesystems[1509]: Resized partition /dev/vda9 Jan 13 21:32:20.756988 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:32:20.765084 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:32:20.765129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1238) Jan 13 21:32:20.758973 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:32:20.769953 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:32:20.771335 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:32:20.777670 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:32:20.777901 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:32:20.778123 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:32:20.778314 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
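extend-filesystems above walks the vda partitions and hands /dev/vda9 to resize2fs; the entries just below record the root filesystem growing from 553472 to 1864699 4k blocks. A sketch that reports the mounted root filesystem's size in the same units via statvfs (assuming / is the resized ext4 on /dev/vda9, as the log states):

#!/usr/bin/env python3
"""Report the root filesystem size in 4 KiB blocks, comparable to the resize2fs figures below."""
import os

st = os.statvfs("/")
# f_frsize is the fundamental block size; the figures below are in 4096-byte blocks.
total_4k = st.f_blocks * st.f_frsize // 4096
free_4k = st.f_bfree * st.f_frsize // 4096
print(f"total: {total_4k} blocks of 4 KiB, free: {free_4k}")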
Jan 13 21:32:20.784050 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:32:20.784095 jq[1536]: true Jan 13 21:32:20.784391 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:32:20.784621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:32:20.795616 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:32:20.795616 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:32:20.795616 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:32:20.805706 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Jan 13 21:32:20.796960 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:32:20.797181 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:32:20.800088 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:32:20.813915 jq[1541]: true Jan 13 21:32:20.815449 update_engine[1529]: I20250113 21:32:20.815240 1529 main.cc:92] Flatcar Update Engine starting Jan 13 21:32:20.817254 update_engine[1529]: I20250113 21:32:20.816865 1529 update_check_scheduler.cc:74] Next update check in 7m12s Jan 13 21:32:20.820044 tar[1538]: linux-arm64/helm Jan 13 21:32:20.826450 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:32:20.827811 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:32:20.827837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:32:20.828970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:32:20.828997 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:32:20.830504 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:32:20.836993 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:32:20.840299 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:32:20.840741 systemd-logind[1522]: New seat seat0. Jan 13 21:32:20.845297 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:32:20.865246 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:32:20.866159 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:32:20.868442 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:32:20.884965 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:32:21.000906 containerd[1542]: time="2025-01-13T21:32:20.999898430Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:32:21.031065 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:32:21.033094 containerd[1542]: time="2025-01-13T21:32:21.033049750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035014 containerd[1542]: time="2025-01-13T21:32:21.034915030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035014 containerd[1542]: time="2025-01-13T21:32:21.035005630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:32:21.035096 containerd[1542]: time="2025-01-13T21:32:21.035023910Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:32:21.035247 containerd[1542]: time="2025-01-13T21:32:21.035215790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:32:21.035247 containerd[1542]: time="2025-01-13T21:32:21.035242590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035365 containerd[1542]: time="2025-01-13T21:32:21.035345150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035386 containerd[1542]: time="2025-01-13T21:32:21.035367150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035730 containerd[1542]: time="2025-01-13T21:32:21.035697030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035730 containerd[1542]: time="2025-01-13T21:32:21.035724990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035768 containerd[1542]: time="2025-01-13T21:32:21.035740790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035768 containerd[1542]: time="2025-01-13T21:32:21.035750590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.035861 containerd[1542]: time="2025-01-13T21:32:21.035841670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.036067 containerd[1542]: time="2025-01-13T21:32:21.036048190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:21.036196 containerd[1542]: time="2025-01-13T21:32:21.036180390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:21.036216 containerd[1542]: time="2025-01-13T21:32:21.036202030Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 13 21:32:21.036359 containerd[1542]: time="2025-01-13T21:32:21.036285390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:32:21.036359 containerd[1542]: time="2025-01-13T21:32:21.036331870Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:32:21.039286 containerd[1542]: time="2025-01-13T21:32:21.039260270Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:32:21.039338 containerd[1542]: time="2025-01-13T21:32:21.039307790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:32:21.039338 containerd[1542]: time="2025-01-13T21:32:21.039326150Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:32:21.039382 containerd[1542]: time="2025-01-13T21:32:21.039341270Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:32:21.039382 containerd[1542]: time="2025-01-13T21:32:21.039354510Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:32:21.039488 containerd[1542]: time="2025-01-13T21:32:21.039470510Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:32:21.039844 containerd[1542]: time="2025-01-13T21:32:21.039817990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:32:21.039985 containerd[1542]: time="2025-01-13T21:32:21.039968030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:32:21.040007 containerd[1542]: time="2025-01-13T21:32:21.039990390Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:32:21.040025 containerd[1542]: time="2025-01-13T21:32:21.040005150Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:32:21.040025 containerd[1542]: time="2025-01-13T21:32:21.040019910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040068 containerd[1542]: time="2025-01-13T21:32:21.040033070Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040068 containerd[1542]: time="2025-01-13T21:32:21.040045790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040215 containerd[1542]: time="2025-01-13T21:32:21.040195390Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040222790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040245830Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040263470Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040281670Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040305710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040324350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040343630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040362990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040376990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040394790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040415790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040432390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040450510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040641 containerd[1542]: time="2025-01-13T21:32:21.040551590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040576870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040596510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040613230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040634990Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040665150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040682830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040698910Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040810390Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040831830Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040847510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040881590Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:32:21.040905 containerd[1542]: time="2025-01-13T21:32:21.040897030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.041100 containerd[1542]: time="2025-01-13T21:32:21.040913830Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:32:21.041100 containerd[1542]: time="2025-01-13T21:32:21.040924630Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:32:21.041100 containerd[1542]: time="2025-01-13T21:32:21.040938950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:32:21.041490 containerd[1542]: time="2025-01-13T21:32:21.041223110Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:32:21.041490 containerd[1542]: time="2025-01-13T21:32:21.041293670Z" level=info msg="Connect containerd service" Jan 13 21:32:21.041490 containerd[1542]: time="2025-01-13T21:32:21.041393150Z" level=info msg="using legacy CRI server" Jan 13 21:32:21.041898 containerd[1542]: time="2025-01-13T21:32:21.041629270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:32:21.041898 containerd[1542]: time="2025-01-13T21:32:21.041769710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:32:21.042506 containerd[1542]: time="2025-01-13T21:32:21.042476070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:32:21.042682 containerd[1542]: time="2025-01-13T21:32:21.042656190Z" level=info msg="Start subscribing containerd event" Jan 13 21:32:21.042708 containerd[1542]: time="2025-01-13T21:32:21.042698870Z" level=info msg="Start recovering state" Jan 13 21:32:21.042981 containerd[1542]: time="2025-01-13T21:32:21.042924670Z" level=info msg="Start event monitor" Jan 13 21:32:21.042981 containerd[1542]: time="2025-01-13T21:32:21.042943230Z" level=info msg="Start snapshots syncer" Jan 13 21:32:21.042981 containerd[1542]: time="2025-01-13T21:32:21.042956870Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:32:21.042981 containerd[1542]: time="2025-01-13T21:32:21.042964630Z" level=info msg="Start streaming server" Jan 13 21:32:21.043664 containerd[1542]: time="2025-01-13T21:32:21.043396830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:32:21.043664 containerd[1542]: time="2025-01-13T21:32:21.043454990Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:32:21.043664 containerd[1542]: time="2025-01-13T21:32:21.043511910Z" level=info msg="containerd successfully booted in 0.045447s" Jan 13 21:32:21.043613 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:32:21.049658 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:32:21.057067 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:32:21.062026 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:32:21.062228 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:32:21.064875 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:32:21.077138 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:32:21.088071 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:32:21.089902 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:32:21.090938 systemd[1]: Reached target getty.target - Login Prompts. 
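Editor's note: the "failed to load cni during init" error above is the expected first-boot state: the CRI plugin looked in /etc/cni/net.d (the NetworkPluginConfDir shown in the config dump) and found no network config, because no pod-network add-on has installed one yet. Below is a minimal sketch, in Python, of what writing such a conflist could look like; the plugin type "bridge", the subnet, the network name, and the file name are illustrative assumptions, not values observed on this host.

# Sketch only: writes a hypothetical CNI config into the directory containerd
# reported as empty ("/etc/cni/net.d").  Plugin type, subnet and file name are
# illustrative assumptions; in practice a pod-network add-on writes this file.
import json, pathlib

CNI_DIR = pathlib.Path("/etc/cni/net.d")   # NetworkPluginConfDir from the CRI config above
conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",                 # hypothetical network name
    "plugins": [
        {
            "type": "bridge",              # assumes the standard bridge plugin is installed
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.85.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

CNI_DIR.mkdir(parents=True, exist_ok=True)
(CNI_DIR / "10-example.conflist").write_text(json.dumps(conf, indent=2))
print("wrote", CNI_DIR / "10-example.conflist")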
Jan 13 21:32:21.166008 tar[1538]: linux-arm64/LICENSE Jan 13 21:32:21.166085 tar[1538]: linux-arm64/README.md Jan 13 21:32:21.178953 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:32:21.712113 systemd-networkd[1229]: eth0: Gained IPv6LL Jan 13 21:32:21.714784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:32:21.716710 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:32:21.730059 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:32:21.732093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:21.733845 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:32:21.746813 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:32:21.747079 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:32:21.748609 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:32:21.753955 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:32:22.202169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:22.203744 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:32:22.206145 systemd[1]: Startup finished in 5.081s (kernel) + 3.571s (userspace) = 8.653s. Jan 13 21:32:22.206340 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:22.711582 kubelet[1643]: E0113 21:32:22.711485 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:22.714182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:22.714364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:26.261798 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:32:26.273068 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:48546.service - OpenSSH per-connection server daemon (10.0.0.1:48546). Jan 13 21:32:26.326027 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 48546 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.327730 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.338213 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:32:26.348044 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:32:26.350037 systemd-logind[1522]: New session 1 of user core. Jan 13 21:32:26.357551 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:32:26.359656 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:32:26.365396 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:32:26.432921 systemd[1663]: Queued start job for default target default.target. Jan 13 21:32:26.433238 systemd[1663]: Created slice app.slice - User Application Slice. Jan 13 21:32:26.433255 systemd[1663]: Reached target paths.target - Paths. 
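Editor's note: the kubelet exit above is the stock first-boot failure: /var/lib/kubelet/config.yaml does not exist yet because no provisioner (kubeadm or equivalent) has written it. A minimal sketch of a KubeletConfiguration that would satisfy the file check follows; the field values are assumptions for illustration, except staticPodPath, which matches the "/etc/kubernetes/manifests" path the kubelet logs later in this boot.

# Sketch only: writes a minimal, hypothetical KubeletConfiguration to the path
# the kubelet complained about.  Normally kubeadm generates this file; the
# authentication settings below are illustrative assumptions.
import pathlib, textwrap

CONFIG_PATH = pathlib.Path("/var/lib/kubelet/config.yaml")
CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    staticPodPath: /etc/kubernetes/manifests
"""))
print("wrote", CONFIG_PATH)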
Jan 13 21:32:26.433266 systemd[1663]: Reached target timers.target - Timers. Jan 13 21:32:26.443933 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:32:26.449111 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:32:26.449169 systemd[1663]: Reached target sockets.target - Sockets. Jan 13 21:32:26.449181 systemd[1663]: Reached target basic.target - Basic System. Jan 13 21:32:26.449216 systemd[1663]: Reached target default.target - Main User Target. Jan 13 21:32:26.449239 systemd[1663]: Startup finished in 79ms. Jan 13 21:32:26.449501 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:32:26.450698 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:32:26.507251 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:48548.service - OpenSSH per-connection server daemon (10.0.0.1:48548). Jan 13 21:32:26.533868 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 48548 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.534953 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.539065 systemd-logind[1522]: New session 2 of user core. Jan 13 21:32:26.548222 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:32:26.598000 sshd[1675]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:26.606178 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:48560.service - OpenSSH per-connection server daemon (10.0.0.1:48560). Jan 13 21:32:26.606801 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:48548.service: Deactivated successfully. Jan 13 21:32:26.608106 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:32:26.608760 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:32:26.609916 systemd-logind[1522]: Removed session 2. Jan 13 21:32:26.632425 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 48560 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.633533 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.636902 systemd-logind[1522]: New session 3 of user core. Jan 13 21:32:26.656140 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:32:26.702876 sshd[1680]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:26.716134 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562). Jan 13 21:32:26.716588 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:48560.service: Deactivated successfully. Jan 13 21:32:26.717774 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:32:26.718412 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:32:26.719600 systemd-logind[1522]: Removed session 3. Jan 13 21:32:26.742860 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.743915 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.747575 systemd-logind[1522]: New session 4 of user core. Jan 13 21:32:26.759169 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:32:26.809679 sshd[1688]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:26.819148 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). 
Jan 13 21:32:26.819503 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:48562.service: Deactivated successfully. Jan 13 21:32:26.821293 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:32:26.821644 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:32:26.823182 systemd-logind[1522]: Removed session 4. Jan 13 21:32:26.845395 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.846476 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.849977 systemd-logind[1522]: New session 5 of user core. Jan 13 21:32:26.872186 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:32:26.931731 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:32:26.932059 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:26.948678 sudo[1703]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:26.950173 sshd[1696]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:26.967074 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:48584.service - OpenSSH per-connection server daemon (10.0.0.1:48584). Jan 13 21:32:26.968179 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:48578.service: Deactivated successfully. Jan 13 21:32:26.969553 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:32:26.970191 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:32:26.971870 systemd-logind[1522]: Removed session 5. Jan 13 21:32:26.994275 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48584 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:26.995380 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:26.999023 systemd-logind[1522]: New session 6 of user core. Jan 13 21:32:27.010101 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:32:27.059203 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:32:27.059452 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:27.062023 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:27.066564 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:32:27.067092 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:27.082062 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:27.083195 auditctl[1716]: No rules Jan 13 21:32:27.083912 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:32:27.084123 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:27.085849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:27.108022 augenrules[1735]: No rules Jan 13 21:32:27.109159 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:27.110065 sudo[1712]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:27.111343 sshd[1706]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:27.126057 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:48598.service - OpenSSH per-connection server daemon (10.0.0.1:48598). 
Jan 13 21:32:27.126510 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:48584.service: Deactivated successfully. Jan 13 21:32:27.127819 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:32:27.128347 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:32:27.129440 systemd-logind[1522]: Removed session 6. Jan 13 21:32:27.152683 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 48598 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:32:27.153736 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:27.157423 systemd-logind[1522]: New session 7 of user core. Jan 13 21:32:27.169059 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:32:27.220244 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:32:27.220514 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:27.519128 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:32:27.519265 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:32:27.778774 dockerd[1766]: time="2025-01-13T21:32:27.778657870Z" level=info msg="Starting up" Jan 13 21:32:28.004464 dockerd[1766]: time="2025-01-13T21:32:28.004407910Z" level=info msg="Loading containers: start." Jan 13 21:32:28.092903 kernel: Initializing XFRM netlink socket Jan 13 21:32:28.149440 systemd-networkd[1229]: docker0: Link UP Jan 13 21:32:28.162972 dockerd[1766]: time="2025-01-13T21:32:28.162922870Z" level=info msg="Loading containers: done." Jan 13 21:32:28.173728 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3246437315-merged.mount: Deactivated successfully. Jan 13 21:32:28.174903 dockerd[1766]: time="2025-01-13T21:32:28.174844870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:32:28.174991 dockerd[1766]: time="2025-01-13T21:32:28.174961110Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:32:28.175084 dockerd[1766]: time="2025-01-13T21:32:28.175055870Z" level=info msg="Daemon has completed initialization" Jan 13 21:32:28.202060 dockerd[1766]: time="2025-01-13T21:32:28.201924110Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:32:28.202391 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:32:28.993339 containerd[1542]: time="2025-01-13T21:32:28.993290910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:32:29.760415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216432064.mount: Deactivated successfully. 
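Editor's note: dockerd reports its API listening on /run/docker.sock. The sketch below is one way to talk to that endpoint directly over the Unix socket, asking only for the daemon version; /version is the standard Docker Engine API path, but treat the snippet as an illustrative probe rather than part of this host's tooling.

# Sketch only: minimal HTTP/1.0 request to the Docker Engine API over the Unix
# socket the daemon reported above ("API listen on /run/docker.sock").
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

# The body is JSON describing the daemon, e.g. "Version":"26.1.0" as logged above.
print(reply.decode(errors="replace"))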
Jan 13 21:32:31.696986 containerd[1542]: time="2025-01-13T21:32:31.696932150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:31.697490 containerd[1542]: time="2025-01-13T21:32:31.697454910Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 21:32:31.698299 containerd[1542]: time="2025-01-13T21:32:31.698243670Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:31.701378 containerd[1542]: time="2025-01-13T21:32:31.701317270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:31.702529 containerd[1542]: time="2025-01-13T21:32:31.702507310Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.70917056s" Jan 13 21:32:31.702825 containerd[1542]: time="2025-01-13T21:32:31.702557390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 21:32:31.720343 containerd[1542]: time="2025-01-13T21:32:31.720313350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:32:32.964795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:32:32.975003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:33.058680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:33.062143 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:33.101113 kubelet[1992]: E0113 21:32:33.101045 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:33.104392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:33.104580 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:32:34.151149 containerd[1542]: time="2025-01-13T21:32:34.151094950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.151565 containerd[1542]: time="2025-01-13T21:32:34.151501390Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 21:32:34.152476 containerd[1542]: time="2025-01-13T21:32:34.152434750Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.155562 containerd[1542]: time="2025-01-13T21:32:34.155526510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.156717 containerd[1542]: time="2025-01-13T21:32:34.156652270Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.4362982s" Jan 13 21:32:34.156751 containerd[1542]: time="2025-01-13T21:32:34.156714030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 21:32:34.174710 containerd[1542]: time="2025-01-13T21:32:34.174675350Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:32:35.430311 containerd[1542]: time="2025-01-13T21:32:35.430216910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:35.430816 containerd[1542]: time="2025-01-13T21:32:35.430777710Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 21:32:35.431695 containerd[1542]: time="2025-01-13T21:32:35.431648790Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:35.434468 containerd[1542]: time="2025-01-13T21:32:35.434404950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:35.435638 containerd[1542]: time="2025-01-13T21:32:35.435554230Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.26084076s" Jan 13 21:32:35.435638 containerd[1542]: time="2025-01-13T21:32:35.435593190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 21:32:35.454019 
containerd[1542]: time="2025-01-13T21:32:35.453971750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:32:36.590415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573240267.mount: Deactivated successfully. Jan 13 21:32:36.913033 containerd[1542]: time="2025-01-13T21:32:36.912916030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.913875 containerd[1542]: time="2025-01-13T21:32:36.913617790Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 21:32:36.914626 containerd[1542]: time="2025-01-13T21:32:36.914585670Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.916880 containerd[1542]: time="2025-01-13T21:32:36.916843510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.917611 containerd[1542]: time="2025-01-13T21:32:36.917451550Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.4634376s" Jan 13 21:32:36.917611 containerd[1542]: time="2025-01-13T21:32:36.917496230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 21:32:36.935183 containerd[1542]: time="2025-01-13T21:32:36.935106310Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:32:37.493793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396980146.mount: Deactivated successfully. 
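Editor's note: each pull record above logs both the bytes read and the wall-clock duration, so the effective transfer rate can be read straight off the log; the kube-proxy pull, for example, moved 25273979 bytes in about 1.46s, roughly 17 MB/s. A small worked example using only the figures quoted in the preceding log lines:

# Worked example: effective pull rate from the figures containerd logged above.
# (bytes read, seconds) pairs are copied from the "stop pulling" / "Pulled image" records.
pulls = {
    "kube-apiserver:v1.29.12":          (32201252, 2.70917056),
    "kube-controller-manager:v1.29.12": (29381299, 2.4362982),
    "kube-scheduler:v1.29.12":          (15765642, 1.26084076),
    "kube-proxy:v1.29.12":              (25273979, 1.4634376),
}

for image, (nbytes, seconds) in pulls.items():
    rate = nbytes / seconds / 1e6   # MB/s, decimal megabytes
    print(f"{image:36s} {nbytes/1e6:7.1f} MB in {seconds:5.2f}s  ~ {rate:5.1f} MB/s")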
Jan 13 21:32:38.437815 containerd[1542]: time="2025-01-13T21:32:38.437758270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.438703 containerd[1542]: time="2025-01-13T21:32:38.438628510Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:32:38.439291 containerd[1542]: time="2025-01-13T21:32:38.439256150Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.442596 containerd[1542]: time="2025-01-13T21:32:38.442561110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.443628 containerd[1542]: time="2025-01-13T21:32:38.443586630Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.50844596s" Jan 13 21:32:38.443628 containerd[1542]: time="2025-01-13T21:32:38.443624750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:32:38.461729 containerd[1542]: time="2025-01-13T21:32:38.461699870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:32:38.922412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054191633.mount: Deactivated successfully. 
Jan 13 21:32:38.926049 containerd[1542]: time="2025-01-13T21:32:38.926013190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.926830 containerd[1542]: time="2025-01-13T21:32:38.926801670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 21:32:38.927519 containerd[1542]: time="2025-01-13T21:32:38.927458310Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.929612 containerd[1542]: time="2025-01-13T21:32:38.929561430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.930711 containerd[1542]: time="2025-01-13T21:32:38.930668590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 468.93404ms" Jan 13 21:32:38.930711 containerd[1542]: time="2025-01-13T21:32:38.930705110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:32:38.948996 containerd[1542]: time="2025-01-13T21:32:38.948968870Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:32:39.578405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435332095.mount: Deactivated successfully. Jan 13 21:32:41.857238 containerd[1542]: time="2025-01-13T21:32:41.857175270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:41.857706 containerd[1542]: time="2025-01-13T21:32:41.857673830Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 21:32:41.858603 containerd[1542]: time="2025-01-13T21:32:41.858580910Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:41.861679 containerd[1542]: time="2025-01-13T21:32:41.861640550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:41.864000 containerd[1542]: time="2025-01-13T21:32:41.863954670Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.91495004s" Jan 13 21:32:41.864000 containerd[1542]: time="2025-01-13T21:32:41.863988710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 21:32:43.354778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 13 21:32:43.364008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:43.444504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:43.448818 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:43.492735 kubelet[2230]: E0113 21:32:43.492667 2230 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:43.495551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:43.495724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:47.476675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:47.486064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:47.501284 systemd[1]: Reloading requested from client PID 2248 ('systemctl') (unit session-7.scope)... Jan 13 21:32:47.501299 systemd[1]: Reloading... Jan 13 21:32:47.555589 zram_generator::config[2287]: No configuration found. Jan 13 21:32:47.689118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:47.737318 systemd[1]: Reloading finished in 235 ms. Jan 13 21:32:47.779750 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:32:47.779812 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:32:47.780165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:47.782167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:47.871551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:47.875105 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:47.913913 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:47.913913 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:47.913913 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:32:47.914281 kubelet[2345]: I0113 21:32:47.913966 2345 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:49.019763 kubelet[2345]: I0113 21:32:49.019720 2345 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:49.019763 kubelet[2345]: I0113 21:32:49.019755 2345 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:49.020143 kubelet[2345]: I0113 21:32:49.019961 2345 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:49.048972 kubelet[2345]: I0113 21:32:49.048824 2345 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:49.052228 kubelet[2345]: E0113 21:32:49.052202 2345 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.057692 kubelet[2345]: I0113 21:32:49.057663 2345 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:32:49.058148 kubelet[2345]: I0113 21:32:49.058130 2345 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:49.058408 kubelet[2345]: I0113 21:32:49.058387 2345 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:49.058690 kubelet[2345]: I0113 21:32:49.058517 2345 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:49.058690 kubelet[2345]: I0113 21:32:49.058533 2345 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:49.059772 kubelet[2345]: I0113 21:32:49.059604 2345 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:49.061816 kubelet[2345]: I0113 21:32:49.061796 2345 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:49.061946 kubelet[2345]: 
I0113 21:32:49.061935 2345 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:49.062035 kubelet[2345]: I0113 21:32:49.062024 2345 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:32:49.062085 kubelet[2345]: I0113 21:32:49.062077 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:49.062281 kubelet[2345]: W0113 21:32:49.062224 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.062346 kubelet[2345]: E0113 21:32:49.062289 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.063452 kubelet[2345]: W0113 21:32:49.062653 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.063452 kubelet[2345]: E0113 21:32:49.062697 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.063811 kubelet[2345]: I0113 21:32:49.063790 2345 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:49.064258 kubelet[2345]: I0113 21:32:49.064245 2345 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:49.064886 kubelet[2345]: W0113 21:32:49.064850 2345 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:32:49.065683 kubelet[2345]: I0113 21:32:49.065670 2345 server.go:1256] "Started kubelet" Jan 13 21:32:49.068269 kubelet[2345]: I0113 21:32:49.065770 2345 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:49.068269 kubelet[2345]: I0113 21:32:49.066528 2345 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:32:49.068269 kubelet[2345]: I0113 21:32:49.066543 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:49.068269 kubelet[2345]: I0113 21:32:49.066734 2345 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:49.073625 kubelet[2345]: I0113 21:32:49.073609 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:49.073826 kubelet[2345]: I0113 21:32:49.073814 2345 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:49.074063 kubelet[2345]: I0113 21:32:49.074045 2345 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:49.074184 kubelet[2345]: I0113 21:32:49.074172 2345 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:49.074530 kubelet[2345]: W0113 21:32:49.074495 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.074630 kubelet[2345]: E0113 21:32:49.074609 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.074838 kubelet[2345]: E0113 21:32:49.074685 2345 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5e0204a57486 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:32:49.06564519 +0000 UTC m=+1.187293441,LastTimestamp:2025-01-13 21:32:49.06564519 +0000 UTC m=+1.187293441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:32:49.075006 kubelet[2345]: E0113 21:32:49.074895 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Jan 13 21:32:49.075394 kubelet[2345]: E0113 21:32:49.075378 2345 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:49.075599 kubelet[2345]: I0113 21:32:49.075475 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:49.076496 kubelet[2345]: I0113 21:32:49.076476 2345 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:49.076496 kubelet[2345]: I0113 21:32:49.076494 2345 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:49.090156 kubelet[2345]: I0113 21:32:49.090041 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:49.091378 kubelet[2345]: I0113 21:32:49.091360 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:32:49.091893 kubelet[2345]: I0113 21:32:49.091466 2345 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:49.091893 kubelet[2345]: I0113 21:32:49.091488 2345 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:49.091893 kubelet[2345]: E0113 21:32:49.091533 2345 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:49.092194 kubelet[2345]: W0113 21:32:49.092159 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.092294 kubelet[2345]: E0113 21:32:49.092284 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.093433 kubelet[2345]: I0113 21:32:49.093410 2345 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:49.093496 kubelet[2345]: I0113 21:32:49.093439 2345 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:49.093496 kubelet[2345]: I0113 21:32:49.093456 2345 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:49.166981 kubelet[2345]: I0113 21:32:49.166934 2345 policy_none.go:49] "None policy: Start" Jan 13 21:32:49.167722 kubelet[2345]: I0113 21:32:49.167699 2345 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:49.167797 kubelet[2345]: I0113 21:32:49.167743 2345 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:49.171995 kubelet[2345]: I0113 21:32:49.171963 2345 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:49.172239 kubelet[2345]: I0113 21:32:49.172209 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:49.173667 kubelet[2345]: E0113 21:32:49.173646 2345 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:32:49.175625 kubelet[2345]: I0113 21:32:49.175596 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:49.175988 kubelet[2345]: E0113 21:32:49.175959 2345 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 13 21:32:49.192152 kubelet[2345]: I0113 21:32:49.192107 2345 topology_manager.go:215] "Topology Admit Handler" podUID="38ffdd8822eac53a18042d29a55cf184" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:32:49.192947 kubelet[2345]: I0113 21:32:49.192923 2345 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:32:49.193724 kubelet[2345]: I0113 21:32:49.193699 2345 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:32:49.275200 kubelet[2345]: I0113 21:32:49.275099 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:49.275200 kubelet[2345]: I0113 21:32:49.275187 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:49.275328 kubelet[2345]: I0113 21:32:49.275244 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:49.275328 kubelet[2345]: I0113 21:32:49.275277 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:49.275328 kubelet[2345]: I0113 21:32:49.275302 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:49.275328 kubelet[2345]: I0113 21:32:49.275322 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:32:49.275413 kubelet[2345]: I0113 21:32:49.275356 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" 
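Editor's note: every API call in the records above (CSR posting, node and service listing, lease creation, node registration) fails with "dial tcp 10.0.0.130:6443: connect: connection refused", which is consistent with the kube-apiserver static pod not having started yet at this point in the boot. A hedged way to watch for the port coming up, using only the address quoted in the log, is sketched below; the poll interval is an arbitrary choice for illustration.

# Sketch only: poll the apiserver endpoint the kubelet is retrying above until
# the TCP port accepts connections.  Address and port are taken from the log.
import socket, time

HOST, PORT = "10.0.0.130", 6443

while True:
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(f"{HOST}:{PORT} is accepting connections")
            break
    except OSError as err:
        print(f"{HOST}:{PORT} not ready yet: {err}")
        time.sleep(5)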
Jan 13 21:32:49.275413 kubelet[2345]: I0113 21:32:49.275375 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:49.275413 kubelet[2345]: I0113 21:32:49.275395 2345 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:49.276302 kubelet[2345]: E0113 21:32:49.276162 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Jan 13 21:32:49.377891 kubelet[2345]: I0113 21:32:49.377660 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:49.378114 kubelet[2345]: E0113 21:32:49.378089 2345 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 13 21:32:49.497702 kubelet[2345]: E0113 21:32:49.497654 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:49.497991 kubelet[2345]: E0113 21:32:49.497662 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:49.498296 containerd[1542]: time="2025-01-13T21:32:49.498242390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.498596 kubelet[2345]: E0113 21:32:49.498459 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:49.498629 containerd[1542]: time="2025-01-13T21:32:49.498594470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.498958 containerd[1542]: time="2025-01-13T21:32:49.498916030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:38ffdd8822eac53a18042d29a55cf184,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.677414 kubelet[2345]: E0113 21:32:49.677303 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Jan 13 21:32:49.780100 kubelet[2345]: I0113 21:32:49.780065 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:49.780460 kubelet[2345]: E0113 21:32:49.780427 2345 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 13 21:32:49.885281 kubelet[2345]: W0113 21:32:49.885180 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.885281 kubelet[2345]: E0113 21:32:49.885273 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.929817 kubelet[2345]: W0113 21:32:49.929703 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:49.929817 kubelet[2345]: E0113 21:32:49.929763 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:50.067031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260682621.mount: Deactivated successfully. Jan 13 21:32:50.070889 containerd[1542]: time="2025-01-13T21:32:50.070828870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.072313 containerd[1542]: time="2025-01-13T21:32:50.072284150Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.073564 containerd[1542]: time="2025-01-13T21:32:50.073532790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:32:50.074092 containerd[1542]: time="2025-01-13T21:32:50.074057590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:50.075385 containerd[1542]: time="2025-01-13T21:32:50.075306230Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.076358 containerd[1542]: time="2025-01-13T21:32:50.076330830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.077889 containerd[1542]: time="2025-01-13T21:32:50.076674630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:50.079202 containerd[1542]: time="2025-01-13T21:32:50.079157110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.081544 
containerd[1542]: time="2025-01-13T21:32:50.081481550Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.82164ms" Jan 13 21:32:50.082184 containerd[1542]: time="2025-01-13T21:32:50.082094310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.12332ms" Jan 13 21:32:50.082964 containerd[1542]: time="2025-01-13T21:32:50.082936550Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.61256ms" Jan 13 21:32:50.117174 kubelet[2345]: W0113 21:32:50.117137 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:50.117174 kubelet[2345]: E0113 21:32:50.117181 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:50.305794 containerd[1542]: time="2025-01-13T21:32:50.305226750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.305794 containerd[1542]: time="2025-01-13T21:32:50.305284350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.305794 containerd[1542]: time="2025-01-13T21:32:50.305300670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.305794 containerd[1542]: time="2025-01-13T21:32:50.305718110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.305794 containerd[1542]: time="2025-01-13T21:32:50.305784070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.305992 containerd[1542]: time="2025-01-13T21:32:50.305799310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.305992 containerd[1542]: time="2025-01-13T21:32:50.305919710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.307138 containerd[1542]: time="2025-01-13T21:32:50.306426870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.307138 containerd[1542]: time="2025-01-13T21:32:50.306932870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.307138 containerd[1542]: time="2025-01-13T21:32:50.306967190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.307138 containerd[1542]: time="2025-01-13T21:32:50.306977430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.307390 containerd[1542]: time="2025-01-13T21:32:50.307332110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.359929 containerd[1542]: time="2025-01-13T21:32:50.359895510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1220894b27a570ea42a9aa30b7b75c71478bb8b6fa28e11ca51bb9596996fd\"" Jan 13 21:32:50.361157 containerd[1542]: time="2025-01-13T21:32:50.361060150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f822f34fd09794f8e76e81e8ced502cd7d044f23d5cfc0afad278b44fe92429\"" Jan 13 21:32:50.362034 kubelet[2345]: E0113 21:32:50.361836 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:50.362034 kubelet[2345]: E0113 21:32:50.361934 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:50.363757 containerd[1542]: time="2025-01-13T21:32:50.363710990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:38ffdd8822eac53a18042d29a55cf184,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3eef45abd7b3edcadfe1839b8904af35161ac623f39d8c36cf4bcd6bc12e304\"" Jan 13 21:32:50.364512 kubelet[2345]: E0113 21:32:50.364447 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:50.365330 containerd[1542]: time="2025-01-13T21:32:50.365120510Z" level=info msg="CreateContainer within sandbox \"0f822f34fd09794f8e76e81e8ced502cd7d044f23d5cfc0afad278b44fe92429\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:32:50.365519 containerd[1542]: time="2025-01-13T21:32:50.365491910Z" level=info msg="CreateContainer within sandbox \"8c1220894b27a570ea42a9aa30b7b75c71478bb8b6fa28e11ca51bb9596996fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:32:50.366427 containerd[1542]: time="2025-01-13T21:32:50.366395470Z" level=info msg="CreateContainer within sandbox \"a3eef45abd7b3edcadfe1839b8904af35161ac623f39d8c36cf4bcd6bc12e304\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:32:50.379659 containerd[1542]: time="2025-01-13T21:32:50.379611510Z" level=info msg="CreateContainer within sandbox 
\"0f822f34fd09794f8e76e81e8ced502cd7d044f23d5cfc0afad278b44fe92429\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"343d71326ab6a98a5fac3cee0b7cd6870247a13ed58dac140de39e285ef59fc9\"" Jan 13 21:32:50.380420 containerd[1542]: time="2025-01-13T21:32:50.380392630Z" level=info msg="StartContainer for \"343d71326ab6a98a5fac3cee0b7cd6870247a13ed58dac140de39e285ef59fc9\"" Jan 13 21:32:50.384739 containerd[1542]: time="2025-01-13T21:32:50.384697790Z" level=info msg="CreateContainer within sandbox \"8c1220894b27a570ea42a9aa30b7b75c71478bb8b6fa28e11ca51bb9596996fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f70c64aa63159f16ace0ca72b90ffc816cd368d655eac4b16329449728f18351\"" Jan 13 21:32:50.385454 containerd[1542]: time="2025-01-13T21:32:50.385401230Z" level=info msg="StartContainer for \"f70c64aa63159f16ace0ca72b90ffc816cd368d655eac4b16329449728f18351\"" Jan 13 21:32:50.387577 containerd[1542]: time="2025-01-13T21:32:50.387544670Z" level=info msg="CreateContainer within sandbox \"a3eef45abd7b3edcadfe1839b8904af35161ac623f39d8c36cf4bcd6bc12e304\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de7137d6a1eb448ca5da2388bbe4a9705d565888da109b71af560e676a2a1742\"" Jan 13 21:32:50.388199 containerd[1542]: time="2025-01-13T21:32:50.388163710Z" level=info msg="StartContainer for \"de7137d6a1eb448ca5da2388bbe4a9705d565888da109b71af560e676a2a1742\"" Jan 13 21:32:50.462303 containerd[1542]: time="2025-01-13T21:32:50.462226950Z" level=info msg="StartContainer for \"f70c64aa63159f16ace0ca72b90ffc816cd368d655eac4b16329449728f18351\" returns successfully" Jan 13 21:32:50.462875 containerd[1542]: time="2025-01-13T21:32:50.462250030Z" level=info msg="StartContainer for \"de7137d6a1eb448ca5da2388bbe4a9705d565888da109b71af560e676a2a1742\" returns successfully" Jan 13 21:32:50.462875 containerd[1542]: time="2025-01-13T21:32:50.462256470Z" level=info msg="StartContainer for \"343d71326ab6a98a5fac3cee0b7cd6870247a13ed58dac140de39e285ef59fc9\" returns successfully" Jan 13 21:32:50.477911 kubelet[2345]: E0113 21:32:50.477890 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Jan 13 21:32:50.582753 kubelet[2345]: I0113 21:32:50.582299 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:50.582753 kubelet[2345]: E0113 21:32:50.582615 2345 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Jan 13 21:32:50.596289 kubelet[2345]: W0113 21:32:50.596234 2345 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:50.596412 kubelet[2345]: E0113 21:32:50.596307 2345 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Jan 13 21:32:51.108863 kubelet[2345]: E0113 21:32:51.101230 2345 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:51.108863 kubelet[2345]: E0113 21:32:51.106595 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:51.108863 kubelet[2345]: E0113 21:32:51.108574 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:52.063866 kubelet[2345]: I0113 21:32:52.063824 2345 apiserver.go:52] "Watching apiserver" Jan 13 21:32:52.074605 kubelet[2345]: I0113 21:32:52.074579 2345 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:32:52.083301 kubelet[2345]: E0113 21:32:52.083266 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:32:52.108109 kubelet[2345]: E0113 21:32:52.108089 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:52.184164 kubelet[2345]: I0113 21:32:52.184136 2345 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:52.188975 kubelet[2345]: I0113 21:32:52.188949 2345 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:32:53.678699 kubelet[2345]: E0113 21:32:53.678657 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:54.112924 kubelet[2345]: E0113 21:32:54.112704 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:54.995068 systemd[1]: Reloading requested from client PID 2623 ('systemctl') (unit session-7.scope)... Jan 13 21:32:54.995086 systemd[1]: Reloading... Jan 13 21:32:55.053888 zram_generator::config[2665]: No configuration found. Jan 13 21:32:55.223943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:55.280445 systemd[1]: Reloading finished in 285 ms. Jan 13 21:32:55.306365 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:55.323051 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:32:55.323375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:55.332268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:55.410243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:55.415385 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:55.463369 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:32:55.463369 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:55.463369 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:55.463737 kubelet[2714]: I0113 21:32:55.463415 2714 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:55.467500 kubelet[2714]: I0113 21:32:55.467457 2714 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:55.467500 kubelet[2714]: I0113 21:32:55.467480 2714 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:55.467698 kubelet[2714]: I0113 21:32:55.467635 2714 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:55.469212 kubelet[2714]: I0113 21:32:55.469056 2714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:32:55.470867 kubelet[2714]: I0113 21:32:55.470827 2714 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:55.476381 kubelet[2714]: I0113 21:32:55.476364 2714 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:32:55.476728 kubelet[2714]: I0113 21:32:55.476717 2714 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:55.476898 kubelet[2714]: I0113 21:32:55.476879 2714 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:55.477006 kubelet[2714]: I0113 21:32:55.476907 2714 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:55.477006 kubelet[2714]: I0113 21:32:55.476920 2714 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:55.477006 
kubelet[2714]: I0113 21:32:55.476947 2714 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:55.477093 kubelet[2714]: I0113 21:32:55.477033 2714 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:55.477093 kubelet[2714]: I0113 21:32:55.477046 2714 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:55.477093 kubelet[2714]: I0113 21:32:55.477064 2714 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:32:55.477093 kubelet[2714]: I0113 21:32:55.477077 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:55.477898 kubelet[2714]: I0113 21:32:55.477730 2714 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:55.479225 kubelet[2714]: I0113 21:32:55.479157 2714 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:55.480127 kubelet[2714]: I0113 21:32:55.480111 2714 server.go:1256] "Started kubelet" Jan 13 21:32:55.480771 kubelet[2714]: I0113 21:32:55.480755 2714 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:55.480982 kubelet[2714]: I0113 21:32:55.480968 2714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:55.481823 kubelet[2714]: I0113 21:32:55.481808 2714 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:55.482292 kubelet[2714]: I0113 21:32:55.482271 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:55.483713 kubelet[2714]: I0113 21:32:55.483693 2714 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:32:55.493893 kubelet[2714]: I0113 21:32:55.493809 2714 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:55.495135 kubelet[2714]: I0113 21:32:55.494232 2714 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:55.496989 kubelet[2714]: I0113 21:32:55.494886 2714 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:55.497084 kubelet[2714]: I0113 21:32:55.497061 2714 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:55.497383 kubelet[2714]: I0113 21:32:55.495584 2714 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:55.497383 kubelet[2714]: E0113 21:32:55.495657 2714 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:55.497383 kubelet[2714]: I0113 21:32:55.497267 2714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:55.498318 kubelet[2714]: I0113 21:32:55.498302 2714 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:55.499926 kubelet[2714]: I0113 21:32:55.499627 2714 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:32:55.499926 kubelet[2714]: I0113 21:32:55.499651 2714 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:55.499926 kubelet[2714]: I0113 21:32:55.499667 2714 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:55.499926 kubelet[2714]: E0113 21:32:55.499712 2714 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:55.530666 sudo[2745]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:32:55.531368 sudo[2745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:32:55.539326 kubelet[2714]: I0113 21:32:55.539303 2714 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:55.539326 kubelet[2714]: I0113 21:32:55.539323 2714 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:55.539436 kubelet[2714]: I0113 21:32:55.539342 2714 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:55.539483 kubelet[2714]: I0113 21:32:55.539467 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:32:55.539511 kubelet[2714]: I0113 21:32:55.539489 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:32:55.539511 kubelet[2714]: I0113 21:32:55.539495 2714 policy_none.go:49] "None policy: Start" Jan 13 21:32:55.540229 kubelet[2714]: I0113 21:32:55.540017 2714 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:55.540229 kubelet[2714]: I0113 21:32:55.540043 2714 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:55.540229 kubelet[2714]: I0113 21:32:55.540181 2714 state_mem.go:75] "Updated machine memory state" Jan 13 21:32:55.546696 kubelet[2714]: I0113 21:32:55.546663 2714 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:55.547612 kubelet[2714]: I0113 21:32:55.547585 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:55.597183 kubelet[2714]: I0113 21:32:55.597160 2714 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:32:55.601234 kubelet[2714]: I0113 21:32:55.600287 2714 topology_manager.go:215] "Topology Admit Handler" podUID="38ffdd8822eac53a18042d29a55cf184" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:32:55.601234 kubelet[2714]: I0113 21:32:55.600361 2714 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:32:55.601234 kubelet[2714]: I0113 21:32:55.600413 2714 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:32:55.602723 kubelet[2714]: I0113 21:32:55.602677 2714 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:32:55.602789 kubelet[2714]: I0113 21:32:55.602770 2714 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:32:55.611452 kubelet[2714]: E0113 21:32:55.611418 2714 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:55.697816 kubelet[2714]: I0113 21:32:55.697530 2714 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:55.697816 kubelet[2714]: I0113 21:32:55.697568 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:55.697816 kubelet[2714]: I0113 21:32:55.697590 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38ffdd8822eac53a18042d29a55cf184-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"38ffdd8822eac53a18042d29a55cf184\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:55.697816 kubelet[2714]: I0113 21:32:55.697608 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:55.697816 kubelet[2714]: I0113 21:32:55.697628 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:55.698036 kubelet[2714]: I0113 21:32:55.697648 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:32:55.698036 kubelet[2714]: I0113 21:32:55.697672 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:55.698036 kubelet[2714]: I0113 21:32:55.697690 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:55.698036 kubelet[2714]: I0113 21:32:55.697708 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:32:55.912708 kubelet[2714]: E0113 
21:32:55.912497 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:55.912708 kubelet[2714]: E0113 21:32:55.912507 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:55.913192 kubelet[2714]: E0113 21:32:55.912966 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:55.955338 sudo[2745]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:56.477388 kubelet[2714]: I0113 21:32:56.477352 2714 apiserver.go:52] "Watching apiserver" Jan 13 21:32:56.496672 kubelet[2714]: I0113 21:32:56.496638 2714 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:32:56.508559 kubelet[2714]: E0113 21:32:56.508537 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:56.510554 kubelet[2714]: E0113 21:32:56.508820 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:56.513998 kubelet[2714]: E0113 21:32:56.513122 2714 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 21:32:56.514812 kubelet[2714]: E0113 21:32:56.514757 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:56.528076 kubelet[2714]: I0113 21:32:56.528037 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.527999176 podStartE2EDuration="3.527999176s" podCreationTimestamp="2025-01-13 21:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:56.527669171 +0000 UTC m=+1.107936687" watchObservedRunningTime="2025-01-13 21:32:56.527999176 +0000 UTC m=+1.108266692" Jan 13 21:32:56.535193 kubelet[2714]: I0113 21:32:56.535165 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.53512124 podStartE2EDuration="1.53512124s" podCreationTimestamp="2025-01-13 21:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:56.5351026 +0000 UTC m=+1.115370116" watchObservedRunningTime="2025-01-13 21:32:56.53512124 +0000 UTC m=+1.115388756" Jan 13 21:32:57.510417 kubelet[2714]: E0113 21:32:57.510390 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:32:59.354981 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:59.357822 sshd[1742]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:59.360995 systemd[1]: 
sshd@6-10.0.0.130:22-10.0.0.1:48598.service: Deactivated successfully. Jan 13 21:32:59.362966 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:32:59.363045 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:32:59.365505 systemd-logind[1522]: Removed session 7. Jan 13 21:33:02.416381 kubelet[2714]: E0113 21:33:02.416291 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:02.432285 kubelet[2714]: I0113 21:33:02.432253 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.43220377 podStartE2EDuration="7.43220377s" podCreationTimestamp="2025-01-13 21:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:56.541787137 +0000 UTC m=+1.122054653" watchObservedRunningTime="2025-01-13 21:33:02.43220377 +0000 UTC m=+7.012471286" Jan 13 21:33:02.517982 kubelet[2714]: E0113 21:33:02.516953 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:05.287898 kubelet[2714]: E0113 21:33:05.287603 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:05.520164 kubelet[2714]: E0113 21:33:05.520090 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:05.521113 kubelet[2714]: E0113 21:33:05.521089 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:06.522310 kubelet[2714]: E0113 21:33:06.521131 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:06.553789 update_engine[1529]: I20250113 21:33:06.553717 1529 update_attempter.cc:509] Updating boot flags... Jan 13 21:33:06.597873 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2797) Jan 13 21:33:06.619879 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2800) Jan 13 21:33:09.931511 kubelet[2714]: I0113 21:33:09.931473 2714 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:33:09.951154 kubelet[2714]: I0113 21:33:09.951096 2714 topology_manager.go:215] "Topology Admit Handler" podUID="35c6ef10-b062-4dc5-99c2-ebc1bfb104e8" podNamespace="kube-system" podName="kube-proxy-mrlvw" Jan 13 21:33:09.964992 kubelet[2714]: I0113 21:33:09.964849 2714 topology_manager.go:215] "Topology Admit Handler" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" podNamespace="kube-system" podName="cilium-zvl7h" Jan 13 21:33:09.973021 containerd[1542]: time="2025-01-13T21:33:09.972913423Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
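At 21:33:09 the log shifts from control-plane bootstrap to workload setup: the node now has a PodCIDR (192.168.0.0/24), which the kubelet pushes down to containerd over CRI ("Updating runtime config through cri with podcidr"), containerd notes that no CNI config has been dropped in yet, and the kube-proxy-mrlvw and cilium-zvl7h pods are admitted. A small client-go sketch that reads the assigned PodCIDR back from the Node object follows; the node name "localhost" is from the log, and the kubeconfig path is an assumption for illustration.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at this path; node name "localhost" is taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Expected to print 192.168.0.0/24, matching the CIDR the kubelet handed to containerd.
	fmt.Println("PodCIDR:", node.Spec.PodCIDR)
}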
Jan 13 21:33:09.974904 kubelet[2714]: I0113 21:33:09.974820 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:33:09.992089 kubelet[2714]: I0113 21:33:09.992048 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35c6ef10-b062-4dc5-99c2-ebc1bfb104e8-xtables-lock\") pod \"kube-proxy-mrlvw\" (UID: \"35c6ef10-b062-4dc5-99c2-ebc1bfb104e8\") " pod="kube-system/kube-proxy-mrlvw" Jan 13 21:33:09.992089 kubelet[2714]: I0113 21:33:09.992096 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-run\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:09.992253 kubelet[2714]: I0113 21:33:09.992118 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hostproc\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:09.992253 kubelet[2714]: I0113 21:33:09.992151 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35c6ef10-b062-4dc5-99c2-ebc1bfb104e8-kube-proxy\") pod \"kube-proxy-mrlvw\" (UID: \"35c6ef10-b062-4dc5-99c2-ebc1bfb104e8\") " pod="kube-system/kube-proxy-mrlvw" Jan 13 21:33:09.992253 kubelet[2714]: I0113 21:33:09.992198 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35c6ef10-b062-4dc5-99c2-ebc1bfb104e8-lib-modules\") pod \"kube-proxy-mrlvw\" (UID: \"35c6ef10-b062-4dc5-99c2-ebc1bfb104e8\") " pod="kube-system/kube-proxy-mrlvw" Jan 13 21:33:09.992253 kubelet[2714]: I0113 21:33:09.992241 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xj78\" (UniqueName: \"kubernetes.io/projected/35c6ef10-b062-4dc5-99c2-ebc1bfb104e8-kube-api-access-9xj78\") pod \"kube-proxy-mrlvw\" (UID: \"35c6ef10-b062-4dc5-99c2-ebc1bfb104e8\") " pod="kube-system/kube-proxy-mrlvw" Jan 13 21:33:09.992333 kubelet[2714]: I0113 21:33:09.992267 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-bpf-maps\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:09.992333 kubelet[2714]: I0113 21:33:09.992298 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cni-path\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:09.992374 kubelet[2714]: I0113 21:33:09.992349 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-cgroup\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.093612 kubelet[2714]: 
I0113 21:33:10.093542 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-xtables-lock\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.093779 kubelet[2714]: I0113 21:33:10.093766 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-config-path\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.093906 kubelet[2714]: I0113 21:33:10.093893 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hubble-tls\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094057 kubelet[2714]: I0113 21:33:10.094027 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-etc-cni-netd\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094099 kubelet[2714]: I0113 21:33:10.094075 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-clustermesh-secrets\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094122 kubelet[2714]: I0113 21:33:10.094106 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-net\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094169 kubelet[2714]: I0113 21:33:10.094154 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snv8b\" (UniqueName: \"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-kube-api-access-snv8b\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094216 kubelet[2714]: I0113 21:33:10.094197 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-lib-modules\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.094245 kubelet[2714]: I0113 21:33:10.094224 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-kernel\") pod \"cilium-zvl7h\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " pod="kube-system/cilium-zvl7h" Jan 13 21:33:10.258407 kubelet[2714]: E0113 21:33:10.258294 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:10.261635 containerd[1542]: time="2025-01-13T21:33:10.261269138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrlvw,Uid:35c6ef10-b062-4dc5-99c2-ebc1bfb104e8,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:10.276020 kubelet[2714]: E0113 21:33:10.275988 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:10.277427 containerd[1542]: time="2025-01-13T21:33:10.276730629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvl7h,Uid:ade938a9-bcdd-4d31-8cf7-ce721d91fa37,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:10.384818 kubelet[2714]: I0113 21:33:10.384274 2714 topology_manager.go:215] "Topology Admit Handler" podUID="07bee16a-37bb-46b8-a66e-6b0ddb10dba8" podNamespace="kube-system" podName="cilium-operator-5cc964979-czzv6" Jan 13 21:33:10.398618 kubelet[2714]: I0113 21:33:10.398297 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7zt\" (UniqueName: \"kubernetes.io/projected/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-kube-api-access-gj7zt\") pod \"cilium-operator-5cc964979-czzv6\" (UID: \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\") " pod="kube-system/cilium-operator-5cc964979-czzv6" Jan 13 21:33:10.398618 kubelet[2714]: I0113 21:33:10.398347 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-cilium-config-path\") pod \"cilium-operator-5cc964979-czzv6\" (UID: \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\") " pod="kube-system/cilium-operator-5cc964979-czzv6" Jan 13 21:33:10.410879 containerd[1542]: time="2025-01-13T21:33:10.410289139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:10.410879 containerd[1542]: time="2025-01-13T21:33:10.410766062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:10.410879 containerd[1542]: time="2025-01-13T21:33:10.410790222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.411092 containerd[1542]: time="2025-01-13T21:33:10.410914502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.414659 containerd[1542]: time="2025-01-13T21:33:10.413211476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:10.414659 containerd[1542]: time="2025-01-13T21:33:10.413588918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:10.414659 containerd[1542]: time="2025-01-13T21:33:10.413605558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.414659 containerd[1542]: time="2025-01-13T21:33:10.413696919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.445269 containerd[1542]: time="2025-01-13T21:33:10.445229705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvl7h,Uid:ade938a9-bcdd-4d31-8cf7-ce721d91fa37,Namespace:kube-system,Attempt:0,} returns sandbox id \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\"" Jan 13 21:33:10.449677 kubelet[2714]: E0113 21:33:10.449649 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:10.453881 containerd[1542]: time="2025-01-13T21:33:10.453786076Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:33:10.457180 containerd[1542]: time="2025-01-13T21:33:10.457155016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrlvw,Uid:35c6ef10-b062-4dc5-99c2-ebc1bfb104e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"959fb315ef94566ce43470d5ecd4d19a58d9c8fb187aa6ed8c8bfab556c39b91\"" Jan 13 21:33:10.457804 kubelet[2714]: E0113 21:33:10.457787 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:10.460274 containerd[1542]: time="2025-01-13T21:33:10.460173674Z" level=info msg="CreateContainer within sandbox \"959fb315ef94566ce43470d5ecd4d19a58d9c8fb187aa6ed8c8bfab556c39b91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:33:10.485976 containerd[1542]: time="2025-01-13T21:33:10.485939226Z" level=info msg="CreateContainer within sandbox \"959fb315ef94566ce43470d5ecd4d19a58d9c8fb187aa6ed8c8bfab556c39b91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"510f9a612bd31d3763a03db802fa692ab35eed8b4ed8c4e1fec2a15f28fffe3b\"" Jan 13 21:33:10.487376 containerd[1542]: time="2025-01-13T21:33:10.486476429Z" level=info msg="StartContainer for \"510f9a612bd31d3763a03db802fa692ab35eed8b4ed8c4e1fec2a15f28fffe3b\"" Jan 13 21:33:10.534627 containerd[1542]: time="2025-01-13T21:33:10.533043024Z" level=info msg="StartContainer for \"510f9a612bd31d3763a03db802fa692ab35eed8b4ed8c4e1fec2a15f28fffe3b\" returns successfully" Jan 13 21:33:10.699927 kubelet[2714]: E0113 21:33:10.699885 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:10.700375 containerd[1542]: time="2025-01-13T21:33:10.700312133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-czzv6,Uid:07bee16a-37bb-46b8-a66e-6b0ddb10dba8,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:10.717802 containerd[1542]: time="2025-01-13T21:33:10.717387153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:10.718022 containerd[1542]: time="2025-01-13T21:33:10.717823396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:10.718022 containerd[1542]: time="2025-01-13T21:33:10.717840356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.718022 containerd[1542]: time="2025-01-13T21:33:10.717957797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:10.767424 containerd[1542]: time="2025-01-13T21:33:10.767383049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-czzv6,Uid:07bee16a-37bb-46b8-a66e-6b0ddb10dba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\"" Jan 13 21:33:10.768258 kubelet[2714]: E0113 21:33:10.767934 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:11.541886 kubelet[2714]: E0113 21:33:11.540188 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:12.555062 kubelet[2714]: E0113 21:33:12.555020 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:13.926844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679098238.mount: Deactivated successfully. Jan 13 21:33:15.163987 containerd[1542]: time="2025-01-13T21:33:15.163936615Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:15.165006 containerd[1542]: time="2025-01-13T21:33:15.164756498Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650914" Jan 13 21:33:15.165990 containerd[1542]: time="2025-01-13T21:33:15.165681142Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:15.167354 containerd[1542]: time="2025-01-13T21:33:15.167318629Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.713480673s" Jan 13 21:33:15.167453 containerd[1542]: time="2025-01-13T21:33:15.167427710Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:33:15.169484 containerd[1542]: time="2025-01-13T21:33:15.169451718Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:33:15.173984 containerd[1542]: time="2025-01-13T21:33:15.173941658Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:33:15.204603 containerd[1542]: time="2025-01-13T21:33:15.204188427Z" 
level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\"" Jan 13 21:33:15.204974 containerd[1542]: time="2025-01-13T21:33:15.204880070Z" level=info msg="StartContainer for \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\"" Jan 13 21:33:15.205187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402428195.mount: Deactivated successfully. Jan 13 21:33:15.246942 containerd[1542]: time="2025-01-13T21:33:15.246361008Z" level=info msg="StartContainer for \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\" returns successfully" Jan 13 21:33:15.465735 containerd[1542]: time="2025-01-13T21:33:15.453716735Z" level=info msg="shim disconnected" id=1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2 namespace=k8s.io Jan 13 21:33:15.465735 containerd[1542]: time="2025-01-13T21:33:15.465674626Z" level=warning msg="cleaning up after shim disconnected" id=1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2 namespace=k8s.io Jan 13 21:33:15.465735 containerd[1542]: time="2025-01-13T21:33:15.465689466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:15.550963 kubelet[2714]: I0113 21:33:15.550910 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mrlvw" podStartSLOduration=6.550829831 podStartE2EDuration="6.550829831s" podCreationTimestamp="2025-01-13 21:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:11.557904315 +0000 UTC m=+16.138171831" watchObservedRunningTime="2025-01-13 21:33:15.550829831 +0000 UTC m=+20.131097347" Jan 13 21:33:15.572669 kubelet[2714]: E0113 21:33:15.572624 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:15.576552 containerd[1542]: time="2025-01-13T21:33:15.575400496Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:33:15.591483 containerd[1542]: time="2025-01-13T21:33:15.591430564Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\"" Jan 13 21:33:15.592011 containerd[1542]: time="2025-01-13T21:33:15.591985727Z" level=info msg="StartContainer for \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\"" Jan 13 21:33:15.636255 containerd[1542]: time="2025-01-13T21:33:15.636196276Z" level=info msg="StartContainer for \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\" returns successfully" Jan 13 21:33:15.656692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:33:15.656974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:33:15.657050 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:33:15.664198 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 21:33:15.678594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:33:15.691123 containerd[1542]: time="2025-01-13T21:33:15.691059791Z" level=info msg="shim disconnected" id=78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591 namespace=k8s.io Jan 13 21:33:15.691123 containerd[1542]: time="2025-01-13T21:33:15.691113911Z" level=warning msg="cleaning up after shim disconnected" id=78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591 namespace=k8s.io Jan 13 21:33:15.691123 containerd[1542]: time="2025-01-13T21:33:15.691122271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:16.203404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2-rootfs.mount: Deactivated successfully. Jan 13 21:33:16.576191 kubelet[2714]: E0113 21:33:16.576042 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:16.578663 containerd[1542]: time="2025-01-13T21:33:16.578584674Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:33:16.596029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975482644.mount: Deactivated successfully. Jan 13 21:33:16.598253 containerd[1542]: time="2025-01-13T21:33:16.598217793Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\"" Jan 13 21:33:16.598901 containerd[1542]: time="2025-01-13T21:33:16.598866356Z" level=info msg="StartContainer for \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\"" Jan 13 21:33:16.652928 containerd[1542]: time="2025-01-13T21:33:16.651020645Z" level=info msg="StartContainer for \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\" returns successfully" Jan 13 21:33:16.689364 containerd[1542]: time="2025-01-13T21:33:16.689307279Z" level=info msg="shim disconnected" id=79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32 namespace=k8s.io Jan 13 21:33:16.689364 containerd[1542]: time="2025-01-13T21:33:16.689361519Z" level=warning msg="cleaning up after shim disconnected" id=79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32 namespace=k8s.io Jan 13 21:33:16.689364 containerd[1542]: time="2025-01-13T21:33:16.689370559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:17.208954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:17.579527 kubelet[2714]: E0113 21:33:17.579435 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:17.584504 containerd[1542]: time="2025-01-13T21:33:17.584467724Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:33:17.603727 containerd[1542]: time="2025-01-13T21:33:17.603689676Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\"" Jan 13 21:33:17.604459 containerd[1542]: time="2025-01-13T21:33:17.604411039Z" level=info msg="StartContainer for \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\"" Jan 13 21:33:17.654460 containerd[1542]: time="2025-01-13T21:33:17.654419867Z" level=info msg="StartContainer for \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\" returns successfully" Jan 13 21:33:17.669362 containerd[1542]: time="2025-01-13T21:33:17.669304843Z" level=info msg="shim disconnected" id=67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b namespace=k8s.io Jan 13 21:33:17.669362 containerd[1542]: time="2025-01-13T21:33:17.669359643Z" level=warning msg="cleaning up after shim disconnected" id=67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b namespace=k8s.io Jan 13 21:33:17.669362 containerd[1542]: time="2025-01-13T21:33:17.669369683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:18.202561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b-rootfs.mount: Deactivated successfully. Jan 13 21:33:18.583862 kubelet[2714]: E0113 21:33:18.583759 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:18.586705 containerd[1542]: time="2025-01-13T21:33:18.586672756Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:33:18.597250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107999524.mount: Deactivated successfully. 
Jan 13 21:33:18.598208 containerd[1542]: time="2025-01-13T21:33:18.598000916Z" level=info msg="CreateContainer within sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\"" Jan 13 21:33:18.598907 containerd[1542]: time="2025-01-13T21:33:18.598844279Z" level=info msg="StartContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\"" Jan 13 21:33:18.653970 containerd[1542]: time="2025-01-13T21:33:18.652272467Z" level=info msg="StartContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" returns successfully" Jan 13 21:33:18.816745 kubelet[2714]: I0113 21:33:18.816698 2714 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:33:18.837406 kubelet[2714]: I0113 21:33:18.836412 2714 topology_manager.go:215] "Topology Admit Handler" podUID="7c321008-fe26-4dee-a04c-428562c06d0c" podNamespace="kube-system" podName="coredns-76f75df574-vq776" Jan 13 21:33:18.837406 kubelet[2714]: I0113 21:33:18.836600 2714 topology_manager.go:215] "Topology Admit Handler" podUID="368875e2-2437-4938-a844-ad4014035b87" podNamespace="kube-system" podName="coredns-76f75df574-774x2" Jan 13 21:33:18.855820 kubelet[2714]: I0113 21:33:18.854444 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj24k\" (UniqueName: \"kubernetes.io/projected/368875e2-2437-4938-a844-ad4014035b87-kube-api-access-nj24k\") pod \"coredns-76f75df574-774x2\" (UID: \"368875e2-2437-4938-a844-ad4014035b87\") " pod="kube-system/coredns-76f75df574-774x2" Jan 13 21:33:18.855820 kubelet[2714]: I0113 21:33:18.854484 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c321008-fe26-4dee-a04c-428562c06d0c-config-volume\") pod \"coredns-76f75df574-vq776\" (UID: \"7c321008-fe26-4dee-a04c-428562c06d0c\") " pod="kube-system/coredns-76f75df574-vq776" Jan 13 21:33:18.855820 kubelet[2714]: I0113 21:33:18.854508 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdgk\" (UniqueName: \"kubernetes.io/projected/7c321008-fe26-4dee-a04c-428562c06d0c-kube-api-access-ffdgk\") pod \"coredns-76f75df574-vq776\" (UID: \"7c321008-fe26-4dee-a04c-428562c06d0c\") " pod="kube-system/coredns-76f75df574-vq776" Jan 13 21:33:18.855820 kubelet[2714]: I0113 21:33:18.854533 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/368875e2-2437-4938-a844-ad4014035b87-config-volume\") pod \"coredns-76f75df574-774x2\" (UID: \"368875e2-2437-4938-a844-ad4014035b87\") " pod="kube-system/coredns-76f75df574-774x2" Jan 13 21:33:19.140368 kubelet[2714]: E0113 21:33:19.140268 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:19.141376 containerd[1542]: time="2025-01-13T21:33:19.141021920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vq776,Uid:7c321008-fe26-4dee-a04c-428562c06d0c,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:19.143056 kubelet[2714]: E0113 21:33:19.142971 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:19.143368 containerd[1542]: time="2025-01-13T21:33:19.143335887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-774x2,Uid:368875e2-2437-4938-a844-ad4014035b87,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:19.587953 kubelet[2714]: E0113 21:33:19.587587 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:19.608530 kubelet[2714]: I0113 21:33:19.608489 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zvl7h" podStartSLOduration=5.892832062 podStartE2EDuration="10.608451385s" podCreationTimestamp="2025-01-13 21:33:09 +0000 UTC" firstStartedPulling="2025-01-13 21:33:10.453384513 +0000 UTC m=+15.033651989" lastFinishedPulling="2025-01-13 21:33:15.169003796 +0000 UTC m=+19.749271312" observedRunningTime="2025-01-13 21:33:19.607994223 +0000 UTC m=+24.188261739" watchObservedRunningTime="2025-01-13 21:33:19.608451385 +0000 UTC m=+24.188718901" Jan 13 21:33:19.736171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818833888.mount: Deactivated successfully. Jan 13 21:33:20.086582 containerd[1542]: time="2025-01-13T21:33:20.085988986Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:20.086582 containerd[1542]: time="2025-01-13T21:33:20.086413227Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137734" Jan 13 21:33:20.087304 containerd[1542]: time="2025-01-13T21:33:20.087275670Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:20.089452 containerd[1542]: time="2025-01-13T21:33:20.089415517Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.919765638s" Jan 13 21:33:20.089507 containerd[1542]: time="2025-01-13T21:33:20.089455157Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:33:20.092192 containerd[1542]: time="2025-01-13T21:33:20.092159085Z" level=info msg="CreateContainer within sandbox \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:33:20.102575 containerd[1542]: time="2025-01-13T21:33:20.102539317Z" level=info msg="CreateContainer within sandbox \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\"" Jan 13 21:33:20.103014 
containerd[1542]: time="2025-01-13T21:33:20.102891038Z" level=info msg="StartContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\"" Jan 13 21:33:20.147209 containerd[1542]: time="2025-01-13T21:33:20.147164016Z" level=info msg="StartContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" returns successfully" Jan 13 21:33:20.595723 kubelet[2714]: E0113 21:33:20.595685 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:20.597203 kubelet[2714]: E0113 21:33:20.597172 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:20.606204 kubelet[2714]: I0113 21:33:20.606168 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-czzv6" podStartSLOduration=1.284775016 podStartE2EDuration="10.606124038s" podCreationTimestamp="2025-01-13 21:33:10 +0000 UTC" firstStartedPulling="2025-01-13 21:33:10.768331735 +0000 UTC m=+15.348599251" lastFinishedPulling="2025-01-13 21:33:20.089680757 +0000 UTC m=+24.669948273" observedRunningTime="2025-01-13 21:33:20.605711757 +0000 UTC m=+25.185979273" watchObservedRunningTime="2025-01-13 21:33:20.606124038 +0000 UTC m=+25.186391554" Jan 13 21:33:21.597972 kubelet[2714]: E0113 21:33:21.597942 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:21.600762 kubelet[2714]: E0113 21:33:21.598462 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:23.364317 kubelet[2714]: E0113 21:33:23.364283 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:23.825604 systemd-networkd[1229]: cilium_host: Link UP Jan 13 21:33:23.826169 systemd-networkd[1229]: cilium_net: Link UP Jan 13 21:33:23.826172 systemd-networkd[1229]: cilium_net: Gained carrier Jan 13 21:33:23.826917 systemd-networkd[1229]: cilium_host: Gained carrier Jan 13 21:33:23.902911 systemd-networkd[1229]: cilium_vxlan: Link UP Jan 13 21:33:23.902919 systemd-networkd[1229]: cilium_vxlan: Gained carrier Jan 13 21:33:24.195883 kernel: NET: Registered PF_ALG protocol family Jan 13 21:33:24.216002 systemd-networkd[1229]: cilium_host: Gained IPv6LL Jan 13 21:33:24.744513 systemd-networkd[1229]: lxc_health: Link UP Jan 13 21:33:24.752009 systemd-networkd[1229]: lxc_health: Gained carrier Jan 13 21:33:24.753291 systemd-networkd[1229]: cilium_net: Gained IPv6LL Jan 13 21:33:25.137036 systemd-networkd[1229]: cilium_vxlan: Gained IPv6LL Jan 13 21:33:25.298897 systemd-networkd[1229]: lxc889fcbf5da4d: Link UP Jan 13 21:33:25.307880 kernel: eth0: renamed from tmpcc034 Jan 13 21:33:25.310581 systemd-networkd[1229]: lxcf241b41c889a: Link UP Jan 13 21:33:25.321890 kernel: eth0: renamed from tmp536db Jan 13 21:33:25.332036 systemd-networkd[1229]: lxc889fcbf5da4d: Gained carrier Jan 13 21:33:25.332785 systemd-networkd[1229]: lxcf241b41c889a: Gained carrier Jan 13 21:33:26.287902 kubelet[2714]: E0113 21:33:26.286936 2714 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:26.416060 systemd-networkd[1229]: lxc_health: Gained IPv6LL Jan 13 21:33:26.607877 kubelet[2714]: E0113 21:33:26.607524 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:26.736006 systemd-networkd[1229]: lxc889fcbf5da4d: Gained IPv6LL Jan 13 21:33:27.040144 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:59646.service - OpenSSH per-connection server daemon (10.0.0.1:59646). Jan 13 21:33:27.072248 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:27.073580 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:27.079836 systemd-logind[1522]: New session 8 of user core. Jan 13 21:33:27.090176 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:33:27.228635 sshd[3929]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:27.231994 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:59646.service: Deactivated successfully. Jan 13 21:33:27.235281 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:33:27.235286 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:33:27.236492 systemd-logind[1522]: Removed session 8. Jan 13 21:33:27.248011 systemd-networkd[1229]: lxcf241b41c889a: Gained IPv6LL Jan 13 21:33:28.816512 containerd[1542]: time="2025-01-13T21:33:28.816419829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:28.816512 containerd[1542]: time="2025-01-13T21:33:28.816476269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:28.816512 containerd[1542]: time="2025-01-13T21:33:28.816492109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:28.817261 containerd[1542]: time="2025-01-13T21:33:28.816569069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:28.829523 containerd[1542]: time="2025-01-13T21:33:28.829378973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:28.829523 containerd[1542]: time="2025-01-13T21:33:28.829448013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:28.829523 containerd[1542]: time="2025-01-13T21:33:28.829458973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:28.829677 containerd[1542]: time="2025-01-13T21:33:28.829541853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:28.836508 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:33:28.850815 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:33:28.856612 containerd[1542]: time="2025-01-13T21:33:28.856473383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-774x2,Uid:368875e2-2437-4938-a844-ad4014035b87,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc03477e5573a81f02129efc8f4e29e2c117d3f768931555e6556a10fa3fa1aa\"" Jan 13 21:33:28.857374 kubelet[2714]: E0113 21:33:28.857346 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:28.860732 containerd[1542]: time="2025-01-13T21:33:28.860056549Z" level=info msg="CreateContainer within sandbox \"cc03477e5573a81f02129efc8f4e29e2c117d3f768931555e6556a10fa3fa1aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:28.873902 containerd[1542]: time="2025-01-13T21:33:28.873776615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vq776,Uid:7c321008-fe26-4dee-a04c-428562c06d0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"536db42cd9309a56bc693442ee4c055404352c35e7b822ff60f912b43bd858a1\"" Jan 13 21:33:28.874547 kubelet[2714]: E0113 21:33:28.874515 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:28.877569 containerd[1542]: time="2025-01-13T21:33:28.877503102Z" level=info msg="CreateContainer within sandbox \"536db42cd9309a56bc693442ee4c055404352c35e7b822ff60f912b43bd858a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:28.887016 containerd[1542]: time="2025-01-13T21:33:28.886976279Z" level=info msg="CreateContainer within sandbox \"cc03477e5573a81f02129efc8f4e29e2c117d3f768931555e6556a10fa3fa1aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f10abe30bf35f58cd53e76d250bd33aa93551c21599ea8bfaf7f80b1dfff5396\"" Jan 13 21:33:28.888120 containerd[1542]: time="2025-01-13T21:33:28.887401200Z" level=info msg="StartContainer for \"f10abe30bf35f58cd53e76d250bd33aa93551c21599ea8bfaf7f80b1dfff5396\"" Jan 13 21:33:28.891381 containerd[1542]: time="2025-01-13T21:33:28.891324647Z" level=info msg="CreateContainer within sandbox \"536db42cd9309a56bc693442ee4c055404352c35e7b822ff60f912b43bd858a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3942dca02fc0073c2697b80a2b01e638327ad4c0393f1cc235abb31fcf1d5c86\"" Jan 13 21:33:28.892900 containerd[1542]: time="2025-01-13T21:33:28.891904328Z" level=info msg="StartContainer for \"3942dca02fc0073c2697b80a2b01e638327ad4c0393f1cc235abb31fcf1d5c86\"" Jan 13 21:33:28.953666 containerd[1542]: time="2025-01-13T21:33:28.953535922Z" level=info msg="StartContainer for \"3942dca02fc0073c2697b80a2b01e638327ad4c0393f1cc235abb31fcf1d5c86\" returns successfully" Jan 13 21:33:28.953666 containerd[1542]: time="2025-01-13T21:33:28.953550522Z" level=info msg="StartContainer for \"f10abe30bf35f58cd53e76d250bd33aa93551c21599ea8bfaf7f80b1dfff5396\" returns successfully" Jan 13 21:33:29.614673 kubelet[2714]: E0113 21:33:29.614394 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:29.617139 kubelet[2714]: E0113 21:33:29.616124 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:29.625812 kubelet[2714]: I0113 21:33:29.625194 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-774x2" podStartSLOduration=19.625142732 podStartE2EDuration="19.625142732s" podCreationTimestamp="2025-01-13 21:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:29.624388291 +0000 UTC m=+34.204655807" watchObservedRunningTime="2025-01-13 21:33:29.625142732 +0000 UTC m=+34.205410248" Jan 13 21:33:29.634662 kubelet[2714]: I0113 21:33:29.634624 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vq776" podStartSLOduration=19.634584188 podStartE2EDuration="19.634584188s" podCreationTimestamp="2025-01-13 21:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:29.634111588 +0000 UTC m=+34.214379104" watchObservedRunningTime="2025-01-13 21:33:29.634584188 +0000 UTC m=+34.214851664" Jan 13 21:33:30.618028 kubelet[2714]: E0113 21:33:30.617982 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:30.618742 kubelet[2714]: E0113 21:33:30.618724 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:31.620251 kubelet[2714]: E0113 21:33:31.619897 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:31.620633 kubelet[2714]: E0113 21:33:31.620450 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:33:32.242114 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:59660.service - OpenSSH per-connection server daemon (10.0.0.1:59660). Jan 13 21:33:32.272163 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:32.273878 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:32.278006 systemd-logind[1522]: New session 9 of user core. Jan 13 21:33:32.288179 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:33:32.402076 sshd[4121]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:32.405150 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:59660.service: Deactivated successfully. Jan 13 21:33:32.407032 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:33:32.407149 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:33:32.408731 systemd-logind[1522]: Removed session 9. 
Jan 13 21:33:37.421123 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:40304.service - OpenSSH per-connection server daemon (10.0.0.1:40304). Jan 13 21:33:37.451104 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 40304 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:37.452502 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:37.456240 systemd-logind[1522]: New session 10 of user core. Jan 13 21:33:37.467096 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:33:37.578163 sshd[4138]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:37.591085 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:40306.service - OpenSSH per-connection server daemon (10.0.0.1:40306). Jan 13 21:33:37.591458 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:40304.service: Deactivated successfully. Jan 13 21:33:37.594406 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:33:37.594497 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:33:37.596848 systemd-logind[1522]: Removed session 10. Jan 13 21:33:37.622416 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 40306 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:37.623843 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:37.627903 systemd-logind[1522]: New session 11 of user core. Jan 13 21:33:37.635128 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:33:37.778394 sshd[4151]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:37.786206 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Jan 13 21:33:37.787627 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:40306.service: Deactivated successfully. Jan 13 21:33:37.795242 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:33:37.796149 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:33:37.798576 systemd-logind[1522]: Removed session 11. Jan 13 21:33:37.819243 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:37.820506 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:37.824362 systemd-logind[1522]: New session 12 of user core. Jan 13 21:33:37.833152 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:33:37.944955 sshd[4165]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:37.948343 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:40322.service: Deactivated successfully. Jan 13 21:33:37.950476 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:33:37.951310 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:33:37.952171 systemd-logind[1522]: Removed session 12. Jan 13 21:33:42.956082 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:55866.service - OpenSSH per-connection server daemon (10.0.0.1:55866). Jan 13 21:33:42.984080 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 55866 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:42.985216 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:42.989267 systemd-logind[1522]: New session 13 of user core. 
Jan 13 21:33:42.997166 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:33:43.110329 sshd[4186]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:43.113656 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:55866.service: Deactivated successfully. Jan 13 21:33:43.116808 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:33:43.117008 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:33:43.118916 systemd-logind[1522]: Removed session 13. Jan 13 21:33:48.125188 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:55868.service - OpenSSH per-connection server daemon (10.0.0.1:55868). Jan 13 21:33:48.184097 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 55868 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:48.185301 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:48.189493 systemd-logind[1522]: New session 14 of user core. Jan 13 21:33:48.199224 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:33:48.306061 sshd[4201]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:48.317238 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:55882.service - OpenSSH per-connection server daemon (10.0.0.1:55882). Jan 13 21:33:48.317605 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:55868.service: Deactivated successfully. Jan 13 21:33:48.323147 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:33:48.324017 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:33:48.325746 systemd-logind[1522]: Removed session 14. Jan 13 21:33:48.347450 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 55882 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:48.348902 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:48.355997 systemd-logind[1522]: New session 15 of user core. Jan 13 21:33:48.365170 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:33:48.567801 sshd[4214]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:48.580095 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:55894.service - OpenSSH per-connection server daemon (10.0.0.1:55894). Jan 13 21:33:48.580565 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:55882.service: Deactivated successfully. Jan 13 21:33:48.582759 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:33:48.583594 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:33:48.585084 systemd-logind[1522]: Removed session 15. Jan 13 21:33:48.622394 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:48.623709 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:48.627429 systemd-logind[1522]: New session 16 of user core. Jan 13 21:33:48.638188 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:33:49.864326 sshd[4227]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:49.873338 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:55904.service - OpenSSH per-connection server daemon (10.0.0.1:55904). Jan 13 21:33:49.874581 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:55894.service: Deactivated successfully. Jan 13 21:33:49.880148 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 13 21:33:49.882815 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:33:49.891243 systemd-logind[1522]: Removed session 16. Jan 13 21:33:49.909573 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 55904 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:49.911096 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:49.914932 systemd-logind[1522]: New session 17 of user core. Jan 13 21:33:49.924438 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:33:50.153042 sshd[4251]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:50.164336 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:55908.service - OpenSSH per-connection server daemon (10.0.0.1:55908). Jan 13 21:33:50.164737 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:55904.service: Deactivated successfully. Jan 13 21:33:50.178582 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:33:50.179747 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:33:50.180998 systemd-logind[1522]: Removed session 17. Jan 13 21:33:50.206392 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 55908 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:50.208189 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:50.211923 systemd-logind[1522]: New session 18 of user core. Jan 13 21:33:50.219206 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:33:50.331432 sshd[4266]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:50.334670 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:55908.service: Deactivated successfully. Jan 13 21:33:50.336709 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:33:50.336790 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:33:50.338351 systemd-logind[1522]: Removed session 18. Jan 13 21:33:55.344100 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342). Jan 13 21:33:55.374053 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:33:55.375126 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:55.382195 systemd-logind[1522]: New session 19 of user core. Jan 13 21:33:55.394116 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:33:55.514584 sshd[4287]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:55.517428 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:41342.service: Deactivated successfully. Jan 13 21:33:55.520644 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:33:55.521061 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:33:55.521892 systemd-logind[1522]: Removed session 19. Jan 13 21:34:00.528083 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:41344.service - OpenSSH per-connection server daemon (10.0.0.1:41344). Jan 13 21:34:00.555798 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 41344 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:00.557002 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:00.560326 systemd-logind[1522]: New session 20 of user core. 
Jan 13 21:34:00.575201 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:34:00.681687 sshd[4305]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:00.685568 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:41344.service: Deactivated successfully. Jan 13 21:34:00.687618 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:34:00.687674 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:34:00.689073 systemd-logind[1522]: Removed session 20. Jan 13 21:34:05.689114 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:52138.service - OpenSSH per-connection server daemon (10.0.0.1:52138). Jan 13 21:34:05.722302 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 52138 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:05.723501 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:05.727114 systemd-logind[1522]: New session 21 of user core. Jan 13 21:34:05.739193 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:34:05.842686 sshd[4320]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:05.855201 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:52140.service - OpenSSH per-connection server daemon (10.0.0.1:52140). Jan 13 21:34:05.855572 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:52138.service: Deactivated successfully. Jan 13 21:34:05.858005 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:34:05.858932 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:34:05.859764 systemd-logind[1522]: Removed session 21. Jan 13 21:34:05.882141 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 52140 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:05.883238 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:05.886928 systemd-logind[1522]: New session 22 of user core. Jan 13 21:34:05.897105 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 21:34:07.954043 containerd[1542]: time="2025-01-13T21:34:07.953996991Z" level=info msg="StopContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" with timeout 30 (s)" Jan 13 21:34:07.954711 containerd[1542]: time="2025-01-13T21:34:07.954552081Z" level=info msg="Stop container \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" with signal terminated" Jan 13 21:34:07.987400 containerd[1542]: time="2025-01-13T21:34:07.987360516Z" level=info msg="StopContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" with timeout 2 (s)" Jan 13 21:34:07.987693 containerd[1542]: time="2025-01-13T21:34:07.987613320Z" level=info msg="Stop container \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" with signal terminated" Jan 13 21:34:07.988833 containerd[1542]: time="2025-01-13T21:34:07.988770981Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:34:07.993614 systemd-networkd[1229]: lxc_health: Link DOWN Jan 13 21:34:07.993619 systemd-networkd[1229]: lxc_health: Lost carrier Jan 13 21:34:07.998226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43-rootfs.mount: Deactivated successfully. Jan 13 21:34:08.009948 containerd[1542]: time="2025-01-13T21:34:08.009821399Z" level=info msg="shim disconnected" id=98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43 namespace=k8s.io Jan 13 21:34:08.009948 containerd[1542]: time="2025-01-13T21:34:08.009942201Z" level=warning msg="cleaning up after shim disconnected" id=98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43 namespace=k8s.io Jan 13 21:34:08.009948 containerd[1542]: time="2025-01-13T21:34:08.009954761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:08.033641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393-rootfs.mount: Deactivated successfully. 
Jan 13 21:34:08.040265 containerd[1542]: time="2025-01-13T21:34:08.040177134Z" level=info msg="shim disconnected" id=4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393 namespace=k8s.io Jan 13 21:34:08.040265 containerd[1542]: time="2025-01-13T21:34:08.040262176Z" level=warning msg="cleaning up after shim disconnected" id=4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393 namespace=k8s.io Jan 13 21:34:08.040265 containerd[1542]: time="2025-01-13T21:34:08.040271656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:08.050204 containerd[1542]: time="2025-01-13T21:34:08.050151590Z" level=info msg="StopContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" returns successfully" Jan 13 21:34:08.050790 containerd[1542]: time="2025-01-13T21:34:08.050761801Z" level=info msg="StopPodSandbox for \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\"" Jan 13 21:34:08.051459 containerd[1542]: time="2025-01-13T21:34:08.051436773Z" level=info msg="Container to stop \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.053272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2-shm.mount: Deactivated successfully. Jan 13 21:34:08.054574 containerd[1542]: time="2025-01-13T21:34:08.054543267Z" level=info msg="StopContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" returns successfully" Jan 13 21:34:08.054986 containerd[1542]: time="2025-01-13T21:34:08.054958395Z" level=info msg="StopPodSandbox for \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\"" Jan 13 21:34:08.055035 containerd[1542]: time="2025-01-13T21:34:08.054997595Z" level=info msg="Container to stop \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.055035 containerd[1542]: time="2025-01-13T21:34:08.055010276Z" level=info msg="Container to stop \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.055035 containerd[1542]: time="2025-01-13T21:34:08.055020796Z" level=info msg="Container to stop \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.055035 containerd[1542]: time="2025-01-13T21:34:08.055031036Z" level=info msg="Container to stop \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.055143 containerd[1542]: time="2025-01-13T21:34:08.055041196Z" level=info msg="Container to stop \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:08.057400 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015-shm.mount: Deactivated successfully. 
Jan 13 21:34:08.088052 containerd[1542]: time="2025-01-13T21:34:08.087816494Z" level=info msg="shim disconnected" id=cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2 namespace=k8s.io Jan 13 21:34:08.088052 containerd[1542]: time="2025-01-13T21:34:08.087881615Z" level=warning msg="cleaning up after shim disconnected" id=cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2 namespace=k8s.io Jan 13 21:34:08.088052 containerd[1542]: time="2025-01-13T21:34:08.087889815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:08.088772 containerd[1542]: time="2025-01-13T21:34:08.088724510Z" level=info msg="shim disconnected" id=e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015 namespace=k8s.io Jan 13 21:34:08.088772 containerd[1542]: time="2025-01-13T21:34:08.088768671Z" level=warning msg="cleaning up after shim disconnected" id=e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015 namespace=k8s.io Jan 13 21:34:08.088863 containerd[1542]: time="2025-01-13T21:34:08.088778031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:08.100950 containerd[1542]: time="2025-01-13T21:34:08.100896885Z" level=info msg="TearDown network for sandbox \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\" successfully" Jan 13 21:34:08.100950 containerd[1542]: time="2025-01-13T21:34:08.100936366Z" level=info msg="StopPodSandbox for \"cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2\" returns successfully" Jan 13 21:34:08.102521 containerd[1542]: time="2025-01-13T21:34:08.102455632Z" level=info msg="TearDown network for sandbox \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" successfully" Jan 13 21:34:08.102521 containerd[1542]: time="2025-01-13T21:34:08.102483353Z" level=info msg="StopPodSandbox for \"e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015\" returns successfully" Jan 13 21:34:08.155503 kubelet[2714]: I0113 21:34:08.155455 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-run\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.155503 kubelet[2714]: I0113 21:34:08.155499 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-lib-modules\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155531 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-bpf-maps\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155562 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-config-path\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155584 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snv8b\" (UniqueName: 
\"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-kube-api-access-snv8b\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155611 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-cilium-config-path\") pod \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\" (UID: \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155632 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj7zt\" (UniqueName: \"kubernetes.io/projected/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-kube-api-access-gj7zt\") pod \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\" (UID: \"07bee16a-37bb-46b8-a66e-6b0ddb10dba8\") " Jan 13 21:34:08.155997 kubelet[2714]: I0113 21:34:08.155677 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cni-path\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155695 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-xtables-lock\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155731 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hostproc\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155750 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hubble-tls\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155767 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-kernel\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155787 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-clustermesh-secrets\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156127 kubelet[2714]: I0113 21:34:08.155806 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-net\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156247 kubelet[2714]: I0113 21:34:08.155824 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-cgroup\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.156247 kubelet[2714]: I0113 21:34:08.155841 2714 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-etc-cni-netd\") pod \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\" (UID: \"ade938a9-bcdd-4d31-8cf7-ce721d91fa37\") " Jan 13 21:34:08.159763 kubelet[2714]: I0113 21:34:08.159714 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cni-path" (OuterVolumeSpecName: "cni-path") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.159819 kubelet[2714]: I0113 21:34:08.159723 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.159819 kubelet[2714]: I0113 21:34:08.159810 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.159880 kubelet[2714]: I0113 21:34:08.159827 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.159880 kubelet[2714]: I0113 21:34:08.159844 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.160182 kubelet[2714]: I0113 21:34:08.159962 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.161342 kubelet[2714]: I0113 21:34:08.161291 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hostproc" (OuterVolumeSpecName: "hostproc") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.161460 kubelet[2714]: I0113 21:34:08.161444 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.161688 kubelet[2714]: I0113 21:34:08.161644 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:34:08.163646 kubelet[2714]: I0113 21:34:08.163344 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07bee16a-37bb-46b8-a66e-6b0ddb10dba8" (UID: "07bee16a-37bb-46b8-a66e-6b0ddb10dba8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:34:08.168080 kubelet[2714]: I0113 21:34:08.168031 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:08.168151 kubelet[2714]: I0113 21:34:08.168102 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.168151 kubelet[2714]: I0113 21:34:08.168124 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:08.168245 kubelet[2714]: I0113 21:34:08.168219 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-kube-api-access-snv8b" (OuterVolumeSpecName: "kube-api-access-snv8b") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "kube-api-access-snv8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:08.168330 kubelet[2714]: I0113 21:34:08.168267 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-kube-api-access-gj7zt" (OuterVolumeSpecName: "kube-api-access-gj7zt") pod "07bee16a-37bb-46b8-a66e-6b0ddb10dba8" (UID: "07bee16a-37bb-46b8-a66e-6b0ddb10dba8"). 
InnerVolumeSpecName "kube-api-access-gj7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:08.169118 kubelet[2714]: I0113 21:34:08.169077 2714 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ade938a9-bcdd-4d31-8cf7-ce721d91fa37" (UID: "ade938a9-bcdd-4d31-8cf7-ce721d91fa37"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256441 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256477 2714 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256491 2714 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256503 2714 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-snv8b\" (UniqueName: \"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-kube-api-access-snv8b\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256515 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256526 2714 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gj7zt\" (UniqueName: \"kubernetes.io/projected/07bee16a-37bb-46b8-a66e-6b0ddb10dba8-kube-api-access-gj7zt\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256536 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256569 kubelet[2714]: I0113 21:34:08.256545 2714 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256554 2714 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256564 2714 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256573 2714 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: 
I0113 21:34:08.256584 2714 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256593 2714 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256602 2714 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256613 2714 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.256887 kubelet[2714]: I0113 21:34:08.256623 2714 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ade938a9-bcdd-4d31-8cf7-ce721d91fa37-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 21:34:08.697265 kubelet[2714]: I0113 21:34:08.697231 2714 scope.go:117] "RemoveContainer" containerID="4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393" Jan 13 21:34:08.698610 containerd[1542]: time="2025-01-13T21:34:08.698581185Z" level=info msg="RemoveContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\"" Jan 13 21:34:08.702261 containerd[1542]: time="2025-01-13T21:34:08.702230930Z" level=info msg="RemoveContainer for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" returns successfully" Jan 13 21:34:08.702617 kubelet[2714]: I0113 21:34:08.702591 2714 scope.go:117] "RemoveContainer" containerID="67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b" Jan 13 21:34:08.706324 containerd[1542]: time="2025-01-13T21:34:08.706293761Z" level=info msg="RemoveContainer for \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\"" Jan 13 21:34:08.711748 containerd[1542]: time="2025-01-13T21:34:08.711714497Z" level=info msg="RemoveContainer for \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\" returns successfully" Jan 13 21:34:08.712709 kubelet[2714]: I0113 21:34:08.712332 2714 scope.go:117] "RemoveContainer" containerID="79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32" Jan 13 21:34:08.713889 containerd[1542]: time="2025-01-13T21:34:08.713838054Z" level=info msg="RemoveContainer for \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\"" Jan 13 21:34:08.720322 containerd[1542]: time="2025-01-13T21:34:08.720285168Z" level=info msg="RemoveContainer for \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\" returns successfully" Jan 13 21:34:08.720510 kubelet[2714]: I0113 21:34:08.720463 2714 scope.go:117] "RemoveContainer" containerID="78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591" Jan 13 21:34:08.721526 containerd[1542]: time="2025-01-13T21:34:08.721498109Z" level=info msg="RemoveContainer for \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\"" Jan 13 21:34:08.723695 containerd[1542]: time="2025-01-13T21:34:08.723666068Z" level=info msg="RemoveContainer for 
\"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\" returns successfully" Jan 13 21:34:08.723975 kubelet[2714]: I0113 21:34:08.723887 2714 scope.go:117] "RemoveContainer" containerID="1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2" Jan 13 21:34:08.725121 containerd[1542]: time="2025-01-13T21:34:08.725070412Z" level=info msg="RemoveContainer for \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\"" Jan 13 21:34:08.727037 containerd[1542]: time="2025-01-13T21:34:08.727002566Z" level=info msg="RemoveContainer for \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\" returns successfully" Jan 13 21:34:08.727197 kubelet[2714]: I0113 21:34:08.727181 2714 scope.go:117] "RemoveContainer" containerID="4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393" Jan 13 21:34:08.727434 containerd[1542]: time="2025-01-13T21:34:08.727336932Z" level=error msg="ContainerStatus for \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\": not found" Jan 13 21:34:08.736837 kubelet[2714]: E0113 21:34:08.736749 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\": not found" containerID="4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393" Jan 13 21:34:08.740072 kubelet[2714]: I0113 21:34:08.739955 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393"} err="failed to get container status \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a0b4f7efd4ba7e3adc39b932abd541be1328b58413a9e79c2c8ed6aeb94b393\": not found" Jan 13 21:34:08.740072 kubelet[2714]: I0113 21:34:08.739988 2714 scope.go:117] "RemoveContainer" containerID="67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b" Jan 13 21:34:08.740342 containerd[1542]: time="2025-01-13T21:34:08.740309041Z" level=error msg="ContainerStatus for \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\": not found" Jan 13 21:34:08.740489 kubelet[2714]: E0113 21:34:08.740468 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\": not found" containerID="67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b" Jan 13 21:34:08.740540 kubelet[2714]: I0113 21:34:08.740527 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b"} err="failed to get container status \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\": rpc error: code = NotFound desc = an error occurred when try to find container \"67a35f378a15f331453565daba50c79604697cc0f468401cb68a2713fd4d848b\": not found" Jan 13 21:34:08.740579 kubelet[2714]: I0113 21:34:08.740543 2714 
scope.go:117] "RemoveContainer" containerID="79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32" Jan 13 21:34:08.740743 containerd[1542]: time="2025-01-13T21:34:08.740710648Z" level=error msg="ContainerStatus for \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\": not found" Jan 13 21:34:08.740862 kubelet[2714]: E0113 21:34:08.740837 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\": not found" containerID="79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32" Jan 13 21:34:08.740897 kubelet[2714]: I0113 21:34:08.740888 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32"} err="failed to get container status \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\": rpc error: code = NotFound desc = an error occurred when try to find container \"79110b5d5ed776856346a76746e20e64d0d88f761f8e5b59e177d1fd44382d32\": not found" Jan 13 21:34:08.740937 kubelet[2714]: I0113 21:34:08.740899 2714 scope.go:117] "RemoveContainer" containerID="78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591" Jan 13 21:34:08.741077 containerd[1542]: time="2025-01-13T21:34:08.741034094Z" level=error msg="ContainerStatus for \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\": not found" Jan 13 21:34:08.741156 kubelet[2714]: E0113 21:34:08.741138 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\": not found" containerID="78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591" Jan 13 21:34:08.741182 kubelet[2714]: I0113 21:34:08.741168 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591"} err="failed to get container status \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\": rpc error: code = NotFound desc = an error occurred when try to find container \"78f8e1c3dba843cf63ebbd02b783682d4655b9a46925c082b596fbe68d8fc591\": not found" Jan 13 21:34:08.741182 kubelet[2714]: I0113 21:34:08.741177 2714 scope.go:117] "RemoveContainer" containerID="1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2" Jan 13 21:34:08.741324 containerd[1542]: time="2025-01-13T21:34:08.741301899Z" level=error msg="ContainerStatus for \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\": not found" Jan 13 21:34:08.741553 kubelet[2714]: E0113 21:34:08.741442 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\": not found" containerID="1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2" Jan 13 21:34:08.741553 kubelet[2714]: I0113 21:34:08.741475 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2"} err="failed to get container status \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1828fa6a9da27a542aaf629682e971aa1fd1444008780f8818023970fa54f1e2\": not found" Jan 13 21:34:08.741553 kubelet[2714]: I0113 21:34:08.741485 2714 scope.go:117] "RemoveContainer" containerID="98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43" Jan 13 21:34:08.742283 containerd[1542]: time="2025-01-13T21:34:08.742238275Z" level=info msg="RemoveContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\"" Jan 13 21:34:08.744579 containerd[1542]: time="2025-01-13T21:34:08.744516435Z" level=info msg="RemoveContainer for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" returns successfully" Jan 13 21:34:08.744798 kubelet[2714]: I0113 21:34:08.744726 2714 scope.go:117] "RemoveContainer" containerID="98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43" Jan 13 21:34:08.744984 containerd[1542]: time="2025-01-13T21:34:08.744939243Z" level=error msg="ContainerStatus for \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\": not found" Jan 13 21:34:08.745150 kubelet[2714]: E0113 21:34:08.745133 2714 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\": not found" containerID="98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43" Jan 13 21:34:08.745195 kubelet[2714]: I0113 21:34:08.745165 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43"} err="failed to get container status \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\": rpc error: code = NotFound desc = an error occurred when try to find container \"98e3d4d97b949babed71a1891f8fcce298194abe9c354babfec214c672c25f43\": not found" Jan 13 21:34:08.967500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc55333e23ff1d40e32e2e08964d18dc743db8b811d4d74f3a5bdad6cac52f2-rootfs.mount: Deactivated successfully. Jan 13 21:34:08.967671 systemd[1]: var-lib-kubelet-pods-07bee16a\x2d37bb\x2d46b8\x2da66e\x2d6b0ddb10dba8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgj7zt.mount: Deactivated successfully. Jan 13 21:34:08.967764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e788eadb04a644fe921735136b89f900929c4ff5ab63d0bd603b0167b32f1015-rootfs.mount: Deactivated successfully. Jan 13 21:34:08.967837 systemd[1]: var-lib-kubelet-pods-ade938a9\x2dbcdd\x2d4d31\x2d8cf7\x2dce721d91fa37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnv8b.mount: Deactivated successfully. 
Jan 13 21:34:08.967947 systemd[1]: var-lib-kubelet-pods-ade938a9\x2dbcdd\x2d4d31\x2d8cf7\x2dce721d91fa37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:34:08.968029 systemd[1]: var-lib-kubelet-pods-ade938a9\x2dbcdd\x2d4d31\x2d8cf7\x2dce721d91fa37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:34:09.503395 kubelet[2714]: I0113 21:34:09.502559 2714 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="07bee16a-37bb-46b8-a66e-6b0ddb10dba8" path="/var/lib/kubelet/pods/07bee16a-37bb-46b8-a66e-6b0ddb10dba8/volumes" Jan 13 21:34:09.503395 kubelet[2714]: I0113 21:34:09.502982 2714 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" path="/var/lib/kubelet/pods/ade938a9-bcdd-4d31-8cf7-ce721d91fa37/volumes" Jan 13 21:34:09.910755 sshd[4333]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:09.917085 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:52154.service - OpenSSH per-connection server daemon (10.0.0.1:52154). Jan 13 21:34:09.917454 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:52140.service: Deactivated successfully. Jan 13 21:34:09.920339 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:34:09.921218 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:34:09.922555 systemd-logind[1522]: Removed session 22. Jan 13 21:34:09.945646 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 52154 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:09.947043 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:09.951786 systemd-logind[1522]: New session 23 of user core. Jan 13 21:34:09.961162 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:34:10.565432 kubelet[2714]: E0113 21:34:10.565401 2714 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:34:10.703525 sshd[4501]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:10.718601 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:52168.service - OpenSSH per-connection server daemon (10.0.0.1:52168). Jan 13 21:34:10.719137 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:52154.service: Deactivated successfully. 
Jan 13 21:34:10.723892 kubelet[2714]: I0113 21:34:10.723199 2714 topology_manager.go:215] "Topology Admit Handler" podUID="fb4c8ed9-0e23-44eb-8fcd-0612f1043898" podNamespace="kube-system" podName="cilium-lvjsr" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723257 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="mount-cgroup" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723268 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="apply-sysctl-overwrites" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723276 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="mount-bpf-fs" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723283 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="cilium-agent" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723290 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07bee16a-37bb-46b8-a66e-6b0ddb10dba8" containerName="cilium-operator" Jan 13 21:34:10.723892 kubelet[2714]: E0113 21:34:10.723298 2714 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="clean-cilium-state" Jan 13 21:34:10.727288 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:34:10.728953 kubelet[2714]: I0113 21:34:10.728109 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bee16a-37bb-46b8-a66e-6b0ddb10dba8" containerName="cilium-operator" Jan 13 21:34:10.728953 kubelet[2714]: I0113 21:34:10.728150 2714 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade938a9-bcdd-4d31-8cf7-ce721d91fa37" containerName="cilium-agent" Jan 13 21:34:10.730438 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:34:10.732375 systemd-logind[1522]: Removed session 23. 
Jan 13 21:34:10.769887 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 52168 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:10.770508 kubelet[2714]: I0113 21:34:10.770464 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7s4\" (UniqueName: \"kubernetes.io/projected/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-kube-api-access-df7s4\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770576 kubelet[2714]: I0113 21:34:10.770530 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-hubble-tls\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770576 kubelet[2714]: I0113 21:34:10.770552 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-cni-path\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770576 kubelet[2714]: I0113 21:34:10.770574 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-clustermesh-secrets\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770649 kubelet[2714]: I0113 21:34:10.770593 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-cilium-run\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770649 kubelet[2714]: I0113 21:34:10.770615 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-host-proc-sys-net\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770649 kubelet[2714]: I0113 21:34:10.770635 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-cilium-cgroup\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770731 kubelet[2714]: I0113 21:34:10.770653 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-cilium-config-path\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770731 kubelet[2714]: I0113 21:34:10.770673 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-lib-modules\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770731 
kubelet[2714]: I0113 21:34:10.770692 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-xtables-lock\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770731 kubelet[2714]: I0113 21:34:10.770710 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-host-proc-sys-kernel\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770818 kubelet[2714]: I0113 21:34:10.770775 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-hostproc\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.770846 kubelet[2714]: I0113 21:34:10.770821 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-cilium-ipsec-secrets\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.771531 kubelet[2714]: I0113 21:34:10.770851 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-bpf-maps\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.771531 kubelet[2714]: I0113 21:34:10.770947 2714 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb4c8ed9-0e23-44eb-8fcd-0612f1043898-etc-cni-netd\") pod \"cilium-lvjsr\" (UID: \"fb4c8ed9-0e23-44eb-8fcd-0612f1043898\") " pod="kube-system/cilium-lvjsr" Jan 13 21:34:10.771403 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:10.776059 systemd-logind[1522]: New session 24 of user core. Jan 13 21:34:10.787101 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:34:10.837407 sshd[4518]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:10.845067 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:52176.service - OpenSSH per-connection server daemon (10.0.0.1:52176). Jan 13 21:34:10.845446 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:52168.service: Deactivated successfully. Jan 13 21:34:10.847647 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:34:10.848532 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:34:10.850386 systemd-logind[1522]: Removed session 24. Jan 13 21:34:10.880328 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 52176 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:34:10.882438 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:10.894498 systemd-logind[1522]: New session 25 of user core. Jan 13 21:34:10.909166 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 21:34:11.032426 kubelet[2714]: E0113 21:34:11.032380 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:11.032918 containerd[1542]: time="2025-01-13T21:34:11.032849831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvjsr,Uid:fb4c8ed9-0e23-44eb-8fcd-0612f1043898,Namespace:kube-system,Attempt:0,}" Jan 13 21:34:11.051216 containerd[1542]: time="2025-01-13T21:34:11.051009287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:34:11.051216 containerd[1542]: time="2025-01-13T21:34:11.051062048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:34:11.051216 containerd[1542]: time="2025-01-13T21:34:11.051074168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:34:11.051382 containerd[1542]: time="2025-01-13T21:34:11.051174129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:34:11.081625 containerd[1542]: time="2025-01-13T21:34:11.081575584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvjsr,Uid:fb4c8ed9-0e23-44eb-8fcd-0612f1043898,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\"" Jan 13 21:34:11.082256 kubelet[2714]: E0113 21:34:11.082239 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:11.086022 containerd[1542]: time="2025-01-13T21:34:11.084505352Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:34:11.138418 containerd[1542]: time="2025-01-13T21:34:11.138264346Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7899879bda3fec5b2e15cd56e3c6aec1c6a315a006c5061bea64e3a16ee1f29a\"" Jan 13 21:34:11.140309 containerd[1542]: time="2025-01-13T21:34:11.140178257Z" level=info msg="StartContainer for \"7899879bda3fec5b2e15cd56e3c6aec1c6a315a006c5061bea64e3a16ee1f29a\"" Jan 13 21:34:11.187478 containerd[1542]: time="2025-01-13T21:34:11.187436346Z" level=info msg="StartContainer for \"7899879bda3fec5b2e15cd56e3c6aec1c6a315a006c5061bea64e3a16ee1f29a\" returns successfully" Jan 13 21:34:11.233407 containerd[1542]: time="2025-01-13T21:34:11.233341893Z" level=info msg="shim disconnected" id=7899879bda3fec5b2e15cd56e3c6aec1c6a315a006c5061bea64e3a16ee1f29a namespace=k8s.io Jan 13 21:34:11.233407 containerd[1542]: time="2025-01-13T21:34:11.233399694Z" level=warning msg="cleaning up after shim disconnected" id=7899879bda3fec5b2e15cd56e3c6aec1c6a315a006c5061bea64e3a16ee1f29a namespace=k8s.io Jan 13 21:34:11.233407 containerd[1542]: time="2025-01-13T21:34:11.233415054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:11.700861 kubelet[2714]: E0113 21:34:11.700803 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:11.705784 containerd[1542]: time="2025-01-13T21:34:11.705623336Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:34:11.721355 containerd[1542]: time="2025-01-13T21:34:11.721306071Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f58aff0570f93ba162f865eb4f76c999e09a0892eae492b41031811ea87dfa1\"" Jan 13 21:34:11.721979 containerd[1542]: time="2025-01-13T21:34:11.721953442Z" level=info msg="StartContainer for \"2f58aff0570f93ba162f865eb4f76c999e09a0892eae492b41031811ea87dfa1\"" Jan 13 21:34:11.772682 containerd[1542]: time="2025-01-13T21:34:11.772644146Z" level=info msg="StartContainer for \"2f58aff0570f93ba162f865eb4f76c999e09a0892eae492b41031811ea87dfa1\" returns successfully" Jan 13 21:34:11.803102 containerd[1542]: time="2025-01-13T21:34:11.802895598Z" level=info msg="shim disconnected" id=2f58aff0570f93ba162f865eb4f76c999e09a0892eae492b41031811ea87dfa1 namespace=k8s.io Jan 13 21:34:11.803102 containerd[1542]: time="2025-01-13T21:34:11.802957919Z" level=warning msg="cleaning up after shim disconnected" id=2f58aff0570f93ba162f865eb4f76c999e09a0892eae492b41031811ea87dfa1 namespace=k8s.io Jan 13 21:34:11.803102 containerd[1542]: time="2025-01-13T21:34:11.802968639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:12.704385 kubelet[2714]: E0113 21:34:12.704151 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:12.708194 containerd[1542]: time="2025-01-13T21:34:12.708135102Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:34:12.722394 containerd[1542]: time="2025-01-13T21:34:12.722354887Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926\"" Jan 13 21:34:12.722801 containerd[1542]: time="2025-01-13T21:34:12.722782574Z" level=info msg="StartContainer for \"bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926\"" Jan 13 21:34:12.765432 containerd[1542]: time="2025-01-13T21:34:12.765393729Z" level=info msg="StartContainer for \"bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926\" returns successfully" Jan 13 21:34:12.785545 containerd[1542]: time="2025-01-13T21:34:12.785490407Z" level=info msg="shim disconnected" id=bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926 namespace=k8s.io Jan 13 21:34:12.785715 containerd[1542]: time="2025-01-13T21:34:12.785543728Z" level=warning msg="cleaning up after shim disconnected" id=bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926 namespace=k8s.io Jan 13 21:34:12.785715 containerd[1542]: time="2025-01-13T21:34:12.785580129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:12.877084 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bc8b09e51f7a46824468bac5a5d28be372f4271a5f8c0c132ee3445b5b9da926-rootfs.mount: Deactivated successfully. Jan 13 21:34:13.707821 kubelet[2714]: E0113 21:34:13.707794 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:13.711342 containerd[1542]: time="2025-01-13T21:34:13.710882412Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:34:13.750365 containerd[1542]: time="2025-01-13T21:34:13.750309140Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa\"" Jan 13 21:34:13.753912 containerd[1542]: time="2025-01-13T21:34:13.752028046Z" level=info msg="StartContainer for \"73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa\"" Jan 13 21:34:13.797160 containerd[1542]: time="2025-01-13T21:34:13.797115302Z" level=info msg="StartContainer for \"73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa\" returns successfully" Jan 13 21:34:13.813551 containerd[1542]: time="2025-01-13T21:34:13.813498114Z" level=info msg="shim disconnected" id=73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa namespace=k8s.io Jan 13 21:34:13.813737 containerd[1542]: time="2025-01-13T21:34:13.813721518Z" level=warning msg="cleaning up after shim disconnected" id=73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa namespace=k8s.io Jan 13 21:34:13.813788 containerd[1542]: time="2025-01-13T21:34:13.813776799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:13.877252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73470315547881655502d5ac563b56a24205594db73b5b60a8b7986e5d39adaa-rootfs.mount: Deactivated successfully. 
Jan 13 21:34:14.712246 kubelet[2714]: E0113 21:34:14.712182 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:14.715997 containerd[1542]: time="2025-01-13T21:34:14.715946789Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:34:14.725728 containerd[1542]: time="2025-01-13T21:34:14.725628254Z" level=info msg="CreateContainer within sandbox \"5fc96de07c4e3798c378a62cd9a6754674f97b60ca0f1028f6e0fa4c431d66c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d27edbf0f199f72e266ea62b174492275b36702e30662d3e9805a5a29a076492\"" Jan 13 21:34:14.727844 containerd[1542]: time="2025-01-13T21:34:14.727813847Z" level=info msg="StartContainer for \"d27edbf0f199f72e266ea62b174492275b36702e30662d3e9805a5a29a076492\"" Jan 13 21:34:14.777177 containerd[1542]: time="2025-01-13T21:34:14.777059907Z" level=info msg="StartContainer for \"d27edbf0f199f72e266ea62b174492275b36702e30662d3e9805a5a29a076492\" returns successfully" Jan 13 21:34:15.035883 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 21:34:15.717838 kubelet[2714]: E0113 21:34:15.717811 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:15.734768 kubelet[2714]: I0113 21:34:15.734230 2714 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lvjsr" podStartSLOduration=5.734195643 podStartE2EDuration="5.734195643s" podCreationTimestamp="2025-01-13 21:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:34:15.733621794 +0000 UTC m=+80.313889310" watchObservedRunningTime="2025-01-13 21:34:15.734195643 +0000 UTC m=+80.314463119" Jan 13 21:34:17.035042 kubelet[2714]: E0113 21:34:17.035005 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:17.769907 systemd-networkd[1229]: lxc_health: Link UP Jan 13 21:34:17.775815 systemd-networkd[1229]: lxc_health: Gained carrier Jan 13 21:34:19.036204 kubelet[2714]: E0113 21:34:19.034097 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:19.216018 systemd-networkd[1229]: lxc_health: Gained IPv6LL Jan 13 21:34:19.501616 kubelet[2714]: E0113 21:34:19.501263 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:19.725423 kubelet[2714]: E0113 21:34:19.725257 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:20.726965 kubelet[2714]: E0113 21:34:20.726715 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:22.501298 kubelet[2714]: E0113 
21:34:22.501259 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:23.501230 kubelet[2714]: E0113 21:34:23.501138 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:34:23.663928 sshd[4527]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:23.667433 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:52176.service: Deactivated successfully. Jan 13 21:34:23.669994 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:34:23.670796 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:34:23.672293 systemd-logind[1522]: Removed session 25.