Jan 29 12:14:24.897504 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 12:14:24.897525 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 12:14:24.897535 kernel: KASLR enabled
Jan 29 12:14:24.897541 kernel: efi: EFI v2.7 by EDK II
Jan 29 12:14:24.897547 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 29 12:14:24.897552 kernel: random: crng init done
Jan 29 12:14:24.897560 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:14:24.897566 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 29 12:14:24.897572 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 12:14:24.897579 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897586 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897592 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897598 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897604 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897612 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897619 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897626 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897632 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:14:24.897639 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 12:14:24.897645 kernel: NUMA: Failed to initialise from firmware
Jan 29 12:14:24.897652 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:14:24.897658 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 12:14:24.897665 kernel: Zone ranges:
Jan 29 12:14:24.897671 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:14:24.897678 kernel: DMA32 empty
Jan 29 12:14:24.897685 kernel: Normal empty
Jan 29 12:14:24.897692 kernel: Movable zone start for each node
Jan 29 12:14:24.897698 kernel: Early memory node ranges
Jan 29 12:14:24.897704 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 12:14:24.897711 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 12:14:24.897717 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 12:14:24.897724 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 12:14:24.897730 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 12:14:24.897737 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 12:14:24.897743 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 12:14:24.897750 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:14:24.897756 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 12:14:24.897764 kernel: psci: probing for conduit method from ACPI.
Jan 29 12:14:24.897770 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 12:14:24.897777 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 12:14:24.897786 kernel: psci: Trusted OS migration not required
Jan 29 12:14:24.897793 kernel: psci: SMC Calling Convention v1.1
Jan 29 12:14:24.897800 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 12:14:24.897808 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 12:14:24.897815 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 12:14:24.897822 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 12:14:24.897829 kernel: Detected PIPT I-cache on CPU0
Jan 29 12:14:24.897836 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 12:14:24.897843 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 12:14:24.897850 kernel: CPU features: detected: Spectre-v4
Jan 29 12:14:24.897856 kernel: CPU features: detected: Spectre-BHB
Jan 29 12:14:24.897863 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 12:14:24.897870 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 12:14:24.897878 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 12:14:24.897885 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 12:14:24.897892 kernel: alternatives: applying boot alternatives
Jan 29 12:14:24.897900 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:14:24.897907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:14:24.897914 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:14:24.897921 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:14:24.897928 kernel: Fallback order for Node 0: 0
Jan 29 12:14:24.897935 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 12:14:24.897942 kernel: Policy zone: DMA
Jan 29 12:14:24.897949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:14:24.897957 kernel: software IO TLB: area num 4.
Jan 29 12:14:24.897964 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 12:14:24.897984 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 29 12:14:24.897991 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 12:14:24.897998 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:14:24.898006 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:14:24.898014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 12:14:24.898021 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:14:24.898028 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:14:24.898035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:14:24.898042 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 12:14:24.898049 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 12:14:24.898057 kernel: GICv3: 256 SPIs implemented
Jan 29 12:14:24.898064 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 12:14:24.898071 kernel: Root IRQ handler: gic_handle_irq
Jan 29 12:14:24.898087 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 12:14:24.898094 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 12:14:24.898101 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 12:14:24.898108 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 12:14:24.898115 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 12:14:24.898123 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 12:14:24.898130 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 12:14:24.898137 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:14:24.898146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:14:24.898153 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 12:14:24.898160 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 12:14:24.898167 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 12:14:24.898174 kernel: arm-pv: using stolen time PV
Jan 29 12:14:24.898181 kernel: Console: colour dummy device 80x25
Jan 29 12:14:24.898188 kernel: ACPI: Core revision 20230628
Jan 29 12:14:24.898196 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 12:14:24.898203 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:14:24.898210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:14:24.898218 kernel: landlock: Up and running.
Jan 29 12:14:24.898225 kernel: SELinux: Initializing.
Jan 29 12:14:24.898232 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:14:24.898240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:14:24.898247 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:14:24.898254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:14:24.898261 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:14:24.898273 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:14:24.898281 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 12:14:24.898289 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 12:14:24.898296 kernel: Remapping and enabling EFI services.
Jan 29 12:14:24.898303 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:14:24.898310 kernel: Detected PIPT I-cache on CPU1
Jan 29 12:14:24.898317 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 12:14:24.898325 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 12:14:24.898332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:14:24.898339 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 12:14:24.898346 kernel: Detected PIPT I-cache on CPU2
Jan 29 12:14:24.898353 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 12:14:24.898361 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 12:14:24.898369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:14:24.898380 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 12:14:24.898389 kernel: Detected PIPT I-cache on CPU3
Jan 29 12:14:24.898396 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 12:14:24.898404 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 12:14:24.898411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:14:24.898418 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 12:14:24.898426 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 12:14:24.898434 kernel: SMP: Total of 4 processors activated.
Jan 29 12:14:24.898442 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 12:14:24.898449 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 12:14:24.898457 kernel: CPU features: detected: Common not Private translations
Jan 29 12:14:24.898464 kernel: CPU features: detected: CRC32 instructions
Jan 29 12:14:24.898472 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 12:14:24.898479 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 12:14:24.898487 kernel: CPU features: detected: LSE atomic instructions
Jan 29 12:14:24.898495 kernel: CPU features: detected: Privileged Access Never
Jan 29 12:14:24.898503 kernel: CPU features: detected: RAS Extension Support
Jan 29 12:14:24.898510 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 12:14:24.898517 kernel: CPU: All CPU(s) started at EL1
Jan 29 12:14:24.898525 kernel: alternatives: applying system-wide alternatives
Jan 29 12:14:24.898532 kernel: devtmpfs: initialized
Jan 29 12:14:24.898540 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:14:24.898547 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 12:14:24.898554 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:14:24.898563 kernel: SMBIOS 3.0.0 present.
Jan 29 12:14:24.898571 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 29 12:14:24.898578 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:14:24.898585 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 12:14:24.898593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 12:14:24.898600 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 12:14:24.898608 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:14:24.898615 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 29 12:14:24.898622 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:14:24.898631 kernel: cpuidle: using governor menu
Jan 29 12:14:24.898639 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 12:14:24.898646 kernel: ASID allocator initialised with 32768 entries
Jan 29 12:14:24.898654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:14:24.898661 kernel: Serial: AMBA PL011 UART driver
Jan 29 12:14:24.898669 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 12:14:24.898677 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 12:14:24.898684 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 12:14:24.898692 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:14:24.898701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:14:24.898709 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 12:14:24.898716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 12:14:24.898724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:14:24.898732 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:14:24.898739 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 12:14:24.898747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 12:14:24.898754 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:14:24.898762 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:14:24.898771 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:14:24.898782 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:14:24.898794 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:14:24.898801 kernel: ACPI: Interpreter enabled
Jan 29 12:14:24.898809 kernel: ACPI: Using GIC for interrupt routing
Jan 29 12:14:24.898816 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 12:14:24.898823 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 12:14:24.898831 kernel: printk: console [ttyAMA0] enabled
Jan 29 12:14:24.898839 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:14:24.898979 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:14:24.899066 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 12:14:24.899163 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 12:14:24.899249 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 12:14:24.899358 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 12:14:24.899369 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 12:14:24.899377 kernel: PCI host bridge to bus 0000:00
Jan 29 12:14:24.899458 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 12:14:24.899522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 12:14:24.899584 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 12:14:24.899646 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:14:24.899735 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 12:14:24.899815 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 12:14:24.899890 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 12:14:24.899960 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 12:14:24.900030 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:14:24.900118 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:14:24.900190 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 12:14:24.900260 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 12:14:24.900335 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 12:14:24.900398 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 12:14:24.900464 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 12:14:24.900474 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 12:14:24.900482 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 12:14:24.900490 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 12:14:24.900497 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 12:14:24.900505 kernel: iommu: Default domain type: Translated
Jan 29 12:14:24.900512 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 12:14:24.900520 kernel: efivars: Registered efivars operations
Jan 29 12:14:24.900529 kernel: vgaarb: loaded
Jan 29 12:14:24.900537 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 12:14:24.900545 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:14:24.900552 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:14:24.900560 kernel: pnp: PnP ACPI init
Jan 29 12:14:24.900635 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 12:14:24.900646 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 12:14:24.900654 kernel: NET: Registered PF_INET protocol family
Jan 29 12:14:24.900663 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:14:24.900672 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:14:24.900679 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:14:24.900687 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:14:24.900695 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:14:24.900702 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:14:24.900710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:14:24.900717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:14:24.900725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:14:24.900734 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:14:24.900742 kernel: kvm [1]: HYP mode not available
Jan 29 12:14:24.900750 kernel: Initialise system trusted keyrings
Jan 29 12:14:24.900757 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:14:24.900765 kernel: Key type asymmetric registered
Jan 29 12:14:24.900772 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:14:24.900780 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 12:14:24.900788 kernel: io scheduler mq-deadline registered
Jan 29 12:14:24.900795 kernel: io scheduler kyber registered
Jan 29 12:14:24.900805 kernel: io scheduler bfq registered
Jan 29 12:14:24.900812 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 12:14:24.900820 kernel: ACPI: button: Power Button [PWRB]
Jan 29 12:14:24.900828 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 12:14:24.900898 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 12:14:24.900908 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:14:24.900916 kernel: thunder_xcv, ver 1.0
Jan 29 12:14:24.900923 kernel: thunder_bgx, ver 1.0
Jan 29 12:14:24.900931 kernel: nicpf, ver 1.0
Jan 29 12:14:24.900941 kernel: nicvf, ver 1.0
Jan 29 12:14:24.901021 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 12:14:24.901100 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:14:24 UTC (1738152864)
Jan 29 12:14:24.901111 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 12:14:24.901119 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 12:14:24.901126 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 12:14:24.901134 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 12:14:24.901142 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:14:24.901152 kernel: Segment Routing with IPv6
Jan 29 12:14:24.901160 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:14:24.901167 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:14:24.901175 kernel: Key type dns_resolver registered
Jan 29 12:14:24.901182 kernel: registered taskstats version 1
Jan 29 12:14:24.901190 kernel: Loading compiled-in X.509 certificates
Jan 29 12:14:24.901198 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 12:14:24.901205 kernel: Key type .fscrypt registered
Jan 29 12:14:24.901213 kernel: Key type fscrypt-provisioning registered
Jan 29 12:14:24.901238 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:14:24.901246 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:14:24.901253 kernel: ima: No architecture policies found
Jan 29 12:14:24.901261 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 12:14:24.901274 kernel: clk: Disabling unused clocks
Jan 29 12:14:24.901282 kernel: Freeing unused kernel memory: 39360K
Jan 29 12:14:24.901290 kernel: Run /init as init process
Jan 29 12:14:24.901297 kernel: with arguments:
Jan 29 12:14:24.901304 kernel: /init
Jan 29 12:14:24.901314 kernel: with environment:
Jan 29 12:14:24.901321 kernel: HOME=/
Jan 29 12:14:24.901328 kernel: TERM=linux
Jan 29 12:14:24.901336 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:14:24.901346 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:14:24.901355 systemd[1]: Detected virtualization kvm.
Jan 29 12:14:24.901364 systemd[1]: Detected architecture arm64.
Jan 29 12:14:24.901372 systemd[1]: Running in initrd.
Jan 29 12:14:24.901381 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:14:24.901389 systemd[1]: Hostname set to .
Jan 29 12:14:24.901398 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:14:24.901406 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:14:24.901414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:14:24.901422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:14:24.901431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:14:24.901440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:14:24.901450 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:14:24.901458 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:14:24.901468 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:14:24.901486 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:14:24.901495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:14:24.901503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:14:24.901513 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:14:24.901521 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:14:24.901530 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:14:24.901538 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:14:24.901546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:14:24.901555 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:14:24.901563 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:14:24.901571 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:14:24.901580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:14:24.901590 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:14:24.901598 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:14:24.901606 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:14:24.901615 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:14:24.901623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:14:24.901631 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:14:24.901639 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:14:24.901647 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:14:24.901656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:14:24.901665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:14:24.901674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:14:24.901682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:14:24.901690 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:14:24.901716 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 12:14:24.901737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:14:24.901745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:14:24.901754 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:14:24.901764 systemd-journald[237]: Journal started
Jan 29 12:14:24.901783 systemd-journald[237]: Runtime Journal (/run/log/journal/c878ba0a1ddd414aa965b04d095fd0c8) is 5.9M, max 47.3M, 41.4M free.
Jan 29 12:14:24.889351 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 12:14:24.905164 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:14:24.906836 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 12:14:24.908593 kernel: Bridge firewalling registered
Jan 29 12:14:24.908611 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:14:24.909828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:14:24.927219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:14:24.929001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:14:24.930925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:14:24.934270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:14:24.941658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:14:24.944482 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:14:24.946426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:14:24.964250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:14:24.965437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:14:24.968197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:14:24.981287 dracut-cmdline[280]: dracut-dracut-053
Jan 29 12:14:24.983728 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:14:24.994165 systemd-resolved[277]: Positive Trust Anchors:
Jan 29 12:14:24.994183 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:14:24.994215 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:14:24.998906 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 29 12:14:24.999842 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:14:25.004142 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:14:25.056090 kernel: SCSI subsystem initialized
Jan 29 12:14:25.060108 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:14:25.067111 kernel: iscsi: registered transport (tcp)
Jan 29 12:14:25.082107 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:14:25.082128 kernel: QLogic iSCSI HBA Driver
Jan 29 12:14:25.125166 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:14:25.138207 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:14:25.154514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:14:25.154576 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:14:25.155624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:14:25.207120 kernel: raid6: neonx8 gen() 15779 MB/s
Jan 29 12:14:25.224103 kernel: raid6: neonx4 gen() 15628 MB/s
Jan 29 12:14:25.241100 kernel: raid6: neonx2 gen() 13256 MB/s
Jan 29 12:14:25.258102 kernel: raid6: neonx1 gen() 10454 MB/s
Jan 29 12:14:25.275099 kernel: raid6: int64x8 gen() 6949 MB/s
Jan 29 12:14:25.292102 kernel: raid6: int64x4 gen() 7340 MB/s
Jan 29 12:14:25.309098 kernel: raid6: int64x2 gen() 6123 MB/s
Jan 29 12:14:25.326225 kernel: raid6: int64x1 gen() 5036 MB/s
Jan 29 12:14:25.326248 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
Jan 29 12:14:25.344213 kernel: raid6: .... xor() 11904 MB/s, rmw enabled
Jan 29 12:14:25.344231 kernel: raid6: using neon recovery algorithm
Jan 29 12:14:25.350155 kernel: xor: measuring software checksum speed
Jan 29 12:14:25.350170 kernel: 8regs : 19750 MB/sec
Jan 29 12:14:25.351519 kernel: 32regs : 18977 MB/sec
Jan 29 12:14:25.351532 kernel: arm64_neon : 26901 MB/sec
Jan 29 12:14:25.351542 kernel: xor: using function: arm64_neon (26901 MB/sec)
Jan 29 12:14:25.404108 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:14:25.415238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:14:25.425244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:14:25.436350 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 12:14:25.439579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:14:25.451250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:14:25.464094 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 29 12:14:25.493467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:14:25.505266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:14:25.544235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:14:25.555245 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:14:25.566521 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:14:25.568296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:14:25.572402 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:14:25.573640 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:14:25.581283 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:14:25.591349 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:14:25.595513 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 12:14:25.615856 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 12:14:25.615969 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:14:25.615981 kernel: GPT:9289727 != 19775487
Jan 29 12:14:25.615991 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:14:25.616001 kernel: GPT:9289727 != 19775487
Jan 29 12:14:25.616010 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:14:25.616027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:14:25.605553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:14:25.605673 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:14:25.617344 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:14:25.621602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:14:25.621771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:14:25.623512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:14:25.632381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:14:25.635810 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (513)
Jan 29 12:14:25.635832 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (514)
Jan 29 12:14:25.647142 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:14:25.648635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:14:25.654915 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:14:25.662936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:14:25.666981 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:14:25.668292 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:14:25.683268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:14:25.686238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:14:25.689045 disk-uuid[552]: Primary Header is updated.
Jan 29 12:14:25.689045 disk-uuid[552]: Secondary Entries is updated.
Jan 29 12:14:25.689045 disk-uuid[552]: Secondary Header is updated.
Jan 29 12:14:25.693088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:14:25.708311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:14:26.711529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:14:26.711591 disk-uuid[553]: The operation has completed successfully.
Jan 29 12:14:26.746419 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:14:26.747109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:14:26.757212 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:14:26.761848 sh[576]: Success
Jan 29 12:14:26.781119 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 12:14:26.826569 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:14:26.829827 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:14:26.832140 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:14:26.844813 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 12:14:26.844855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:14:26.844867 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:14:26.847584 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:14:26.847606 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:14:26.850896 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:14:26.852196 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:14:26.852906 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:14:26.855782 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:14:26.865737 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:14:26.865779 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:14:26.865797 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:14:26.868114 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:14:26.876429 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:14:26.877321 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:14:26.882707 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:14:26.890220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:14:26.947668 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:14:26.960343 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:14:26.988493 systemd-networkd[762]: lo: Link UP
Jan 29 12:14:26.988504 systemd-networkd[762]: lo: Gained carrier
Jan 29 12:14:26.989170 systemd-networkd[762]: Enumeration completed
Jan 29 12:14:26.989277 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:14:26.991428 ignition[670]: Ignition 2.19.0
Jan 29 12:14:26.989801 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:14:26.991433 ignition[670]: Stage: fetch-offline
Jan 29 12:14:26.989805 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:14:26.991469 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:14:26.990549 systemd[1]: Reached target network.target - Network.
Jan 29 12:14:26.991477 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:14:26.991502 systemd-networkd[762]: eth0: Link UP
Jan 29 12:14:26.991632 ignition[670]: parsed url from cmdline: ""
Jan 29 12:14:26.991506 systemd-networkd[762]: eth0: Gained carrier
Jan 29 12:14:26.991635 ignition[670]: no config URL provided
Jan 29 12:14:26.991513 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:14:26.991640 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:14:26.991646 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:14:26.991666 ignition[670]: op(1): [started] loading QEMU firmware config module
Jan 29 12:14:26.991670 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 12:14:26.998293 ignition[670]: op(1): [finished] loading QEMU firmware config module
Jan 29 12:14:27.012129 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:14:27.045013 ignition[670]: parsing config with SHA512: 98d83b8e71d2d58c636b138de7e5d1dd167c0832cf02908e2130d91cce660c2fd9224d3f90379543ce9034e47e42a354e6b78bf1d0b58b00061c6ec1981c5475
Jan 29 12:14:27.050305 unknown[670]: fetched base config from "system"
Jan 29 12:14:27.050320 unknown[670]: fetched user config from "qemu"
Jan 29 12:14:27.050974 ignition[670]: fetch-offline: fetch-offline passed
Jan 29 12:14:27.051062 ignition[670]: Ignition finished successfully
Jan 29 12:14:27.053552 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:14:27.055435 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 12:14:27.063302 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:14:27.073518 ignition[774]: Ignition 2.19.0
Jan 29 12:14:27.073528 ignition[774]: Stage: kargs
Jan 29 12:14:27.073678 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:14:27.073687 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:14:27.076306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:14:27.074625 ignition[774]: kargs: kargs passed
Jan 29 12:14:27.074667 ignition[774]: Ignition finished successfully
Jan 29 12:14:27.093228 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:14:27.102258 ignition[782]: Ignition 2.19.0
Jan 29 12:14:27.102268 ignition[782]: Stage: disks
Jan 29 12:14:27.102425 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:14:27.102435 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:14:27.104902 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:14:27.103390 ignition[782]: disks: disks passed
Jan 29 12:14:27.106116 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:14:27.103437 ignition[782]: Ignition finished successfully
Jan 29 12:14:27.107930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:14:27.109902 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:14:27.111307 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:14:27.113075 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:14:27.126223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:14:27.135707 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:14:27.139146 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:14:27.142959 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:14:27.191022 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:14:27.192632 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 12:14:27.192375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:14:27.205163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:14:27.206875 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:14:27.208418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:14:27.208460 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:14:27.218578 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Jan 29 12:14:27.218605 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:14:27.218616 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:14:27.218627 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:14:27.208482 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:14:27.212848 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:14:27.216685 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:14:27.223730 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:14:27.225260 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:14:27.265891 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:14:27.270468 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:14:27.273668 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:14:27.276939 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:14:27.352597 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:14:27.367188 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:14:27.369572 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:14:27.375095 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:14:27.388138 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:14:27.392532 ignition[914]: INFO : Ignition 2.19.0
Jan 29 12:14:27.392532 ignition[914]: INFO : Stage: mount
Jan 29 12:14:27.394155 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:14:27.394155 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:14:27.394155 ignition[914]: INFO : mount: mount passed
Jan 29 12:14:27.394155 ignition[914]: INFO : Ignition finished successfully
Jan 29 12:14:27.395544 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:14:27.412193 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:14:27.843615 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:14:27.853306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:14:27.860013 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jan 29 12:14:27.860048 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:14:27.860059 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:14:27.861585 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:14:27.864101 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:14:27.864773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:14:27.882982 ignition[944]: INFO : Ignition 2.19.0
Jan 29 12:14:27.882982 ignition[944]: INFO : Stage: files
Jan 29 12:14:27.884668 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:14:27.884668 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:14:27.884668 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:14:27.888443 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:14:27.888443 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:14:27.891219 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:14:27.891219 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:14:27.891219 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:14:27.891219 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:14:27.889189 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 12:14:27.897673 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:14:27.897673 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:14:27.897673 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 12:14:27.945072 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:14:28.055457 systemd-networkd[762]: eth0: Gained IPv6LL
Jan 29 12:14:28.136873 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:14:28.138820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:14:28.138820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 12:14:28.458124 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 29 12:14:28.510690 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:14:28.512647 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 12:14:28.750514 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 29 12:14:28.906218 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:14:28.906218 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jan 29 12:14:28.909997 ignition[944]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:14:28.933288 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:14:28.937111 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:14:28.938627 ignition[944]: INFO : files: files passed
Jan 29 12:14:28.938627 ignition[944]: INFO : Ignition finished successfully
Jan 29 12:14:28.938976 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:14:28.952589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:14:28.955051 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:14:28.956577 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:14:28.958104 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:14:28.963005 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 12:14:28.965331 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:14:28.965331 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:14:28.968900 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:14:28.970552 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:14:28.971979 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:14:28.984264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:14:29.004975 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:14:29.005113 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:14:29.007402 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:14:29.010169 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:14:29.011251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:14:29.012158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:14:29.027921 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:14:29.030584 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:14:29.042666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:14:29.044985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:14:29.046271 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:14:29.048037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:14:29.048190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:14:29.050795 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:14:29.052830 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:14:29.054477 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:14:29.056183 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:14:29.058218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:14:29.060194 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:14:29.062067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:14:29.064038 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:14:29.066019 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:14:29.067777 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:14:29.069295 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:14:29.069432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:14:29.071749 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:14:29.073735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:14:29.075678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:14:29.080152 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:14:29.081451 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:14:29.081582 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:14:29.084408 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:14:29.084535 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:14:29.086506 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:14:29.088033 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:14:29.089822 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:14:29.091141 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:14:29.092881 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:14:29.095045 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:14:29.095155 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:14:29.096677 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:14:29.096762 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:14:29.098339 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:14:29.098461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:14:29.100152 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:14:29.100274 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:14:29.113292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
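The long run of "Stopped target …" lines here is not an error cascade; it is initrd-cleanup.service isolating initrd-switch-root.target, which stops every unit the upcoming pivot does not need. The service behind it is essentially a one-liner (standard systemd initrd machinery):

    # initrd-cleanup.service, in essence:
    systemctl --no-block isolate initrd-switch-root.target

    # To see what survives the isolate, list the target's dependencies:
    systemctl list-dependencies initrd-switch-root.target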
Jan 29 12:14:29.114244 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:14:29.114389 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:14:29.117098 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:14:29.117935 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:14:29.118098 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:14:29.120265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:14:29.120382 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:14:29.125330 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:14:29.125417 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:14:29.129619 ignition[998]: INFO : Ignition 2.19.0 Jan 29 12:14:29.129619 ignition[998]: INFO : Stage: umount Jan 29 12:14:29.129619 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:14:29.129619 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:14:29.135294 ignition[998]: INFO : umount: umount passed Jan 29 12:14:29.135294 ignition[998]: INFO : Ignition finished successfully Jan 29 12:14:29.132171 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:14:29.132287 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:14:29.135008 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:14:29.135452 systemd[1]: Stopped target network.target - Network. Jan 29 12:14:29.136975 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:14:29.137041 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:14:29.138756 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:14:29.138801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:14:29.140509 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:14:29.140552 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:14:29.142412 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:14:29.142457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:14:29.144393 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:14:29.146036 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:14:29.157131 systemd-networkd[762]: eth0: DHCPv6 lease lost Jan 29 12:14:29.157676 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:14:29.157806 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:14:29.159996 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:14:29.160130 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:14:29.162756 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:14:29.162811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:14:29.173221 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:14:29.174124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:14:29.174192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:14:29.176273 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
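ignition[998] above is a fresh Ignition invocation for the final umount stage (note the new PID and the "Stage: umount" banner); the qemu platform directory it probes matches the platform detected at boot. Roughly how the dracut module invokes it (a sketch; the exact flags of the shipped module may differ):

    ignition --platform=qemu --stage=umount --root=/sysroot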
Jan 29 12:14:29.176324 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:14:29.178185 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:14:29.178245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:14:29.180485 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:14:29.180535 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:14:29.182630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:14:29.194922 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:14:29.195052 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:14:29.202800 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:14:29.202952 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:14:29.204605 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:14:29.204680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:14:29.206626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:14:29.206692 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:14:29.208172 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:14:29.208209 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:14:29.209924 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:14:29.209980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:14:29.212668 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:14:29.212721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:14:29.215464 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:14:29.215519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:14:29.218353 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:14:29.218403 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:14:29.229253 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:14:29.230310 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:14:29.230382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:14:29.232490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:14:29.232544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:14:29.237858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:14:29.237970 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:14:29.240359 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:14:29.242920 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:14:29.252785 systemd[1]: Switching root. Jan 29 12:14:29.285135 systemd-journald[237]: Journal stopped Jan 29 12:14:30.055428 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
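"Switching root" followed by "Journal stopped" is the pivot itself: PID 1 serializes its state, moves /sysroot onto /, re-executes, and the initrd's journald receives SIGTERM on the way out. The unit that triggers it is again a one-liner:

    # initrd-switch-root.service, in essence:
    systemctl --no-block switch-root /sysroot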
Jan 29 12:14:30.055487 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:14:30.055499 kernel: SELinux: policy capability open_perms=1 Jan 29 12:14:30.055509 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:14:30.055519 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:14:30.055528 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:14:30.055538 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:14:30.055547 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:14:30.055560 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:14:30.055569 kernel: audit: type=1403 audit(1738152869.473:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:14:30.055580 systemd[1]: Successfully loaded SELinux policy in 35.794ms. Jan 29 12:14:30.055600 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.442ms. Jan 29 12:14:30.055611 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:14:30.055622 systemd[1]: Detected virtualization kvm. Jan 29 12:14:30.055633 systemd[1]: Detected architecture arm64. Jan 29 12:14:30.055643 systemd[1]: Detected first boot. Jan 29 12:14:30.055653 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:14:30.055665 zram_generator::config[1059]: No configuration found. Jan 29 12:14:30.055677 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:14:30.055687 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:14:30.055697 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:14:30.055710 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:14:30.055721 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:14:30.055731 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:14:30.055741 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:14:30.055754 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:14:30.055765 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:14:30.055775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:14:30.055785 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:14:30.055795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:14:30.055806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:14:30.055817 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:14:30.055827 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:14:30.055838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:14:30.055850 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
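"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided DMI UUID instead of generating a random one, and "Detected first boot" is what later gates the preset and first-boot units. To inspect the same inputs on a KVM guest (standard sysfs paths and systemd tooling):

    # UUID exposed by the QEMU firmware tables:
    cat /sys/class/dmi/id/product_uuid
    # The derived machine ID:
    cat /etc/machine-id
    # What systemd would use if the file were missing:
    systemd-machine-id-setup --print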
Jan 29 12:14:30.055861 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 12:14:30.055871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:14:30.055882 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:14:30.055892 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:14:30.055903 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:14:30.055914 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:14:30.055924 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:14:30.055937 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:14:30.055948 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:14:30.055959 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:14:30.055969 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:14:30.055980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:14:30.055991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:14:30.056002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:14:30.056012 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:14:30.056026 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:14:30.056037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:14:30.056048 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:14:30.056059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:14:30.056069 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:14:30.056162 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:14:30.056174 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:14:30.056184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:14:30.056195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:14:30.056205 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:14:30.056224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:14:30.056236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:14:30.056246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:14:30.056258 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:14:30.056269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:14:30.056280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:14:30.056290 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:14:30.056302 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
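The burst of "Mounting …" units that follows covers the kernel API filesystems; each is a static systemd mount unit rather than an fstab entry. The manual equivalents, for reference:

    mount -t hugetlbfs hugetlbfs /dev/hugepages
    mount -t mqueue    mqueue    /dev/mqueue
    mount -t debugfs   debugfs   /sys/kernel/debug
    mount -t tracefs   tracefs   /sys/kernel/tracing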
Jan 29 12:14:30.056313 kernel: loop: module loaded Jan 29 12:14:30.056322 kernel: ACPI: bus type drm_connector registered Jan 29 12:14:30.056332 kernel: fuse: init (API version 7.39) Jan 29 12:14:30.056341 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:14:30.056352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:14:30.056363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:14:30.056373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:14:30.056384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:14:30.056411 systemd-journald[1145]: Collecting audit messages is disabled. Jan 29 12:14:30.056434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:14:30.056446 systemd-journald[1145]: Journal started Jan 29 12:14:30.056467 systemd-journald[1145]: Runtime Journal (/run/log/journal/c878ba0a1ddd414aa965b04d095fd0c8) is 5.9M, max 47.3M, 41.4M free. Jan 29 12:14:30.061102 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:14:30.061498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:14:30.062756 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:14:30.063974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:14:30.065201 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:14:30.066460 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:14:30.067784 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:14:30.069305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:14:30.070784 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:14:30.070963 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:14:30.072487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:14:30.072656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:14:30.074032 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:14:30.074242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:14:30.075796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:14:30.075963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:14:30.077445 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:14:30.077606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:14:30.078946 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:14:30.079412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:14:30.080871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:14:30.082675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:14:30.084455 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:14:30.096020 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:14:30.110265 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
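Each modprobe@X.service above is an instance of a single template unit that loads one module per instance name, which is why the kernel's "loop: module loaded" and "fuse: init" lines appear interleaved with the unit results. Abridged from systemd's stock template:

    # /usr/lib/systemd/system/modprobe@.service (abridged)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I

    # e.g. the instance started above:
    systemctl start modprobe@fuse.service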
Jan 29 12:14:30.112643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:14:30.113803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:14:30.116297 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:14:30.118564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:14:30.119876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:14:30.122115 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:14:30.123427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:14:30.125298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:14:30.129958 systemd-journald[1145]: Time spent on flushing to /var/log/journal/c878ba0a1ddd414aa965b04d095fd0c8 is 11.739ms for 846 entries. Jan 29 12:14:30.129958 systemd-journald[1145]: System Journal (/var/log/journal/c878ba0a1ddd414aa965b04d095fd0c8) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:14:30.154969 systemd-journald[1145]: Received client request to flush runtime journal. Jan 29 12:14:30.130321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:14:30.134000 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:14:30.135433 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:14:30.136854 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:14:30.144332 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:14:30.146157 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:14:30.148267 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:14:30.158606 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:14:30.160450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:14:30.164874 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:14:30.167441 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 12:14:30.167463 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 12:14:30.171684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:14:30.180274 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:14:30.200446 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:14:30.211278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:14:30.223472 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 29 12:14:30.223493 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 29 12:14:30.227638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:14:30.575097 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
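The repeated "ACLs are not supported, ignoring" messages come from systemd-tmpfiles hitting ACL-type lines on a filesystem without POSIX-ACL support; those lines are skipped, nothing fails. An illustrative tmpfiles.d pair (hypothetical file; standard type/path/mode/user/group/age columns):

    # /etc/tmpfiles.d/example.conf
    d  /run/example      0755 root root -
    a+ /var/log/journal  -    -    -    - d:group:adm:r-x   # the kind of line being ignored here

    # Re-apply without rebooting:
    systemd-tmpfiles --create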
Jan 29 12:14:30.583334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:14:30.604482 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Jan 29 12:14:30.618558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:14:30.630729 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:14:30.649325 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:14:30.653539 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 29 12:14:30.654110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1233) Jan 29 12:14:30.695086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:14:30.722699 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:14:30.758217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:14:30.767214 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:14:30.770188 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:14:30.797454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:14:30.798225 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:14:30.809424 systemd-networkd[1230]: lo: Link UP Jan 29 12:14:30.809736 systemd-networkd[1230]: lo: Gained carrier Jan 29 12:14:30.810558 systemd-networkd[1230]: Enumeration completed Jan 29 12:14:30.810796 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:14:30.811277 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:14:30.811350 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:14:30.812035 systemd-networkd[1230]: eth0: Link UP Jan 29 12:14:30.812041 systemd-networkd[1230]: eth0: Gained carrier Jan 29 12:14:30.812054 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:14:30.825296 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:14:30.830150 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:14:30.831956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:14:30.833607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:14:30.836129 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:14:30.843803 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:14:30.876621 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:14:30.878149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:14:30.879410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:14:30.879444 systemd[1]: Reached target local-fs.target - Local File Systems. 
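eth0 matched the catch-all network unit Flatcar ships, which is why the log flags the "potentially unpredictable interface name" before configuring DHCP and landing on 10.0.0.139/16 from 10.0.0.1. An abridged sketch of such a zz-default.network (the shipped file carries a few more options):

    # /usr/lib/systemd/network/zz-default.network (abridged)
    [Match]
    Name=*

    [Network]
    DHCP=yes

    # Confirm the lease the log reports:
    networkctl status eth0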
Jan 29 12:14:30.880480 systemd[1]: Reached target machines.target - Containers. Jan 29 12:14:30.882549 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:14:30.895294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:14:30.897768 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:14:30.899034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:14:30.900138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:14:30.903400 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:14:30.906247 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:14:30.908740 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:14:30.917105 kernel: loop0: detected capacity change from 0 to 194096 Jan 29 12:14:30.920303 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:14:30.923431 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:14:30.927795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:14:30.926346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:14:30.957111 kernel: loop1: detected capacity change from 0 to 114432 Jan 29 12:14:31.001112 kernel: loop2: detected capacity change from 0 to 114328 Jan 29 12:14:31.043106 kernel: loop3: detected capacity change from 0 to 194096 Jan 29 12:14:31.055113 kernel: loop4: detected capacity change from 0 to 114432 Jan 29 12:14:31.062107 kernel: loop5: detected capacity change from 0 to 114328 Jan 29 12:14:31.065455 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 12:14:31.065853 (sd-merge)[1289]: Merged extensions into '/usr'. Jan 29 12:14:31.069855 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:14:31.069872 systemd[1]: Reloading... Jan 29 12:14:31.112494 zram_generator::config[1318]: No configuration found. Jan 29 12:14:31.156183 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:14:31.212172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:14:31.254665 systemd[1]: Reloading finished in 184 ms. Jan 29 12:14:31.274965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:14:31.276518 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:14:31.300319 systemd[1]: Starting ensure-sysext.service... Jan 29 12:14:31.302389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:14:31.307596 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:14:31.307612 systemd[1]: Reloading... Jan 29 12:14:31.319834 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
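The loop0-loop5 capacity changes and the "(sd-merge)" lines below are systemd-sysext attaching each .raw image and overlaying it onto /usr; that is where the containerd-flatcar, docker-flatcar, and kubernetes extensions come from. The standard tooling around it:

    # Images are picked up from these directories (kubernetes.raw was linked into /etc/extensions by Ignition):
    ls /etc/extensions /run/extensions /var/lib/extensions 2>/dev/null
    # Show what is merged right now:
    systemd-sysext status
    # Re-merge after adding or removing an image:
    systemd-sysext refresh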
Jan 29 12:14:31.320118 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:14:31.320762 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:14:31.320978 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 29 12:14:31.321023 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 29 12:14:31.324743 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:14:31.324754 systemd-tmpfiles[1361]: Skipping /boot Jan 29 12:14:31.331960 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:14:31.331976 systemd-tmpfiles[1361]: Skipping /boot Jan 29 12:14:31.351119 zram_generator::config[1393]: No configuration found. Jan 29 12:14:31.439233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:14:31.482271 systemd[1]: Reloading finished in 174 ms. Jan 29 12:14:31.495989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:14:31.508423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:14:31.511163 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:14:31.513631 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:14:31.517275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:14:31.520266 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:14:31.528449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:14:31.539171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:14:31.542037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:14:31.547191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:14:31.550693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:14:31.551990 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:14:31.554852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:14:31.555043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:14:31.556751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:14:31.556941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:14:31.558679 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:14:31.558929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:14:31.569066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:14:31.580531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:14:31.582868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:14:31.588694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 12:14:31.589841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:14:31.593364 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:14:31.597032 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:14:31.598309 systemd-resolved[1436]: Positive Trust Anchors: Jan 29 12:14:31.598322 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:14:31.598706 augenrules[1473]: No rules Jan 29 12:14:31.598376 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:14:31.599731 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:14:31.601703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:14:31.603507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:14:31.603759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:14:31.605492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:14:31.605753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:14:31.606039 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jan 29 12:14:31.607685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:14:31.607992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:14:31.609609 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:14:31.613178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:14:31.620473 systemd[1]: Reached target network.target - Network. Jan 29 12:14:31.621784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:14:31.623296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:14:31.635368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:14:31.637636 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:14:31.639803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:14:31.642491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:14:31.644292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:14:31.644439 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:14:31.645387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:14:31.645553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
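The "Positive Trust Anchors" entry above is systemd-resolved loading the built-in DNSSEC root trust anchor (the root zone DS record with key tag 20326), and the long negative list is the usual private and reverse zones excluded from validation. The runtime knobs, sketched:

    # /etc/systemd/resolved.conf (sketch)
    [Resolve]
    DNSSEC=allow-downgrade

    # Inspect the live resolver state:
    resolvectl status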
Jan 29 12:14:31.647425 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:14:31.647714 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:14:31.649476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:14:31.649633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:14:31.651518 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:14:31.651720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:14:31.654567 systemd[1]: Finished ensure-sysext.service. Jan 29 12:14:31.659947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:14:31.660016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:14:31.669342 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:14:31.715034 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:14:31.716536 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 12:14:31.716593 systemd-timesyncd[1504]: Initial clock synchronization to Wed 2025-01-29 12:14:31.633139 UTC. Jan 29 12:14:31.716747 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:14:31.717932 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:14:31.719228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:14:31.720467 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:14:31.721719 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:14:31.721759 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:14:31.722673 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:14:31.723869 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:14:31.725073 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:14:31.726302 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:14:31.727943 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:14:31.730622 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:14:31.732764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:14:31.738120 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:14:31.739221 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:14:31.740221 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:14:31.741324 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:14:31.741375 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:14:31.741396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:14:31.742650 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:14:31.744932 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
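systemd-timesyncd contacted 10.0.0.1:123, most likely handed over in the DHCP lease (the DHCP server has the same address); note the synchronization timestamp lands slightly before the surrounding log times because the clock was stepped backwards. A static equivalent would be (stock config file):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1

    # Check sync state afterwards:
    timedatectl timesync-status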
Jan 29 12:14:31.746999 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:14:31.752149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:14:31.753161 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:14:31.754596 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:14:31.757211 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:14:31.761800 jq[1510]: false Jan 29 12:14:31.763346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:14:31.768005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:14:31.777910 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:14:31.779257 extend-filesystems[1512]: Found loop3 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found loop4 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found loop5 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda1 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda2 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda3 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found usr Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda4 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda6 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda7 Jan 29 12:14:31.779257 extend-filesystems[1512]: Found vda9 Jan 29 12:14:31.798004 extend-filesystems[1512]: Checking size of /dev/vda9 Jan 29 12:14:31.789682 dbus-daemon[1509]: [system] SELinux support is enabled Jan 29 12:14:31.782412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:14:31.783679 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:14:31.787220 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:14:31.792025 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:14:31.806682 jq[1531]: true Jan 29 12:14:31.805618 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:14:31.806954 extend-filesystems[1512]: Resized partition /dev/vda9 Jan 29 12:14:31.805850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:14:31.807212 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:14:31.807443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:14:31.813690 extend-filesystems[1538]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:14:31.822171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 12:14:31.815047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:14:31.815303 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
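extend-filesystems grew the root ext4 filesystem online from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB), which the resize2fs and kernel lines here confirm. The equivalent manual flow with standard tools:

    # Inspect the device backing /:
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda
    # Grow the mounted ext4 filesystem to fill its (already enlarged) partition:
    resize2fs /dev/vda9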
Jan 29 12:14:31.836604 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:14:31.839308 jq[1542]: true Jan 29 12:14:31.842284 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1231) Jan 29 12:14:31.845083 update_engine[1529]: I20250129 12:14:31.842486 1529 main.cc:92] Flatcar Update Engine starting Jan 29 12:14:31.846428 tar[1541]: linux-arm64/helm Jan 29 12:14:31.846614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:14:31.846653 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:14:31.848724 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:14:31.848754 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:14:31.855107 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 12:14:31.857865 update_engine[1529]: I20250129 12:14:31.857738 1529 update_check_scheduler.cc:74] Next update check in 2m22s Jan 29 12:14:31.860074 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:14:31.862815 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:14:31.869297 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:14:31.870903 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:14:31.870903 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:14:31.870903 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 12:14:31.885260 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jan 29 12:14:31.872204 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:14:31.872218 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 12:14:31.872460 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:14:31.873105 systemd-logind[1526]: New seat seat0. Jan 29 12:14:31.907333 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:14:31.926057 bash[1573]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:14:31.930783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:14:31.933991 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:14:31.950472 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:14:32.069707 containerd[1546]: time="2025-01-29T12:14:32.069554138Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:14:32.097452 containerd[1546]: time="2025-01-29T12:14:32.097356155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.098909965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.098951290Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.098969815Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099148099Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099175293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099235341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099249156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099442284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099458275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099470665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100057 containerd[1546]: time="2025-01-29T12:14:32.099479888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.099546863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.099737299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.099884115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.099900185Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.099977373Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:14:32.100335 containerd[1546]: time="2025-01-29T12:14:32.100017986Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:14:32.103849 containerd[1546]: time="2025-01-29T12:14:32.103812736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:14:32.103992 containerd[1546]: time="2025-01-29T12:14:32.103976730Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:14:32.104071 containerd[1546]: time="2025-01-29T12:14:32.104052929Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:14:32.104185 containerd[1546]: time="2025-01-29T12:14:32.104171600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:14:32.104259 containerd[1546]: time="2025-01-29T12:14:32.104246809Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:14:32.104462 containerd[1546]: time="2025-01-29T12:14:32.104439897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:14:32.104931 containerd[1546]: time="2025-01-29T12:14:32.104908763Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:14:32.105158 containerd[1546]: time="2025-01-29T12:14:32.105136487Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:14:32.105252 containerd[1546]: time="2025-01-29T12:14:32.105237702Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:14:32.105315 containerd[1546]: time="2025-01-29T12:14:32.105302421Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:14:32.105365 containerd[1546]: time="2025-01-29T12:14:32.105354236Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105421 containerd[1546]: time="2025-01-29T12:14:32.105409732Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105471 containerd[1546]: time="2025-01-29T12:14:32.105460715Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105528 containerd[1546]: time="2025-01-29T12:14:32.105517043Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105581 containerd[1546]: time="2025-01-29T12:14:32.105569332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105630 containerd[1546]: time="2025-01-29T12:14:32.105619722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105680 containerd[1546]: time="2025-01-29T12:14:32.105668925Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:14:32.105737 containerd[1546]: time="2025-01-29T12:14:32.105725687Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:14:32.105795 containerd[1546]: time="2025-01-29T12:14:32.105783875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.105849 containerd[1546]: time="2025-01-29T12:14:32.105837352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.105901 containerd[1546]: time="2025-01-29T12:14:32.105889325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.105961 containerd[1546]: time="2025-01-29T12:14:32.105948978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106017 containerd[1546]: time="2025-01-29T12:14:32.106005543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106088 containerd[1546]: time="2025-01-29T12:14:32.106055893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106146 containerd[1546]: time="2025-01-29T12:14:32.106132843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106249 containerd[1546]: time="2025-01-29T12:14:32.106235760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106303 containerd[1546]: time="2025-01-29T12:14:32.106292127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106379 containerd[1546]: time="2025-01-29T12:14:32.106365159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106430 containerd[1546]: time="2025-01-29T12:14:32.106418596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106478 containerd[1546]: time="2025-01-29T12:14:32.106467403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106533 containerd[1546]: time="2025-01-29T12:14:32.106521355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106586 containerd[1546]: time="2025-01-29T12:14:32.106575703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:14:32.106651 containerd[1546]: time="2025-01-29T12:14:32.106638918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106702 containerd[1546]: time="2025-01-29T12:14:32.106691168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.106765 containerd[1546]: time="2025-01-29T12:14:32.106751177Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:14:32.106931 containerd[1546]: time="2025-01-29T12:14:32.106916873Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:14:32.106995 containerd[1546]: time="2025-01-29T12:14:32.106979494Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:14:32.107059 containerd[1546]: time="2025-01-29T12:14:32.107046905Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:14:32.107137 containerd[1546]: time="2025-01-29T12:14:32.107121836Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:14:32.107182 containerd[1546]: time="2025-01-29T12:14:32.107171514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.107252 containerd[1546]: time="2025-01-29T12:14:32.107238449Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:14:32.107307 containerd[1546]: time="2025-01-29T12:14:32.107296598Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:14:32.107359 containerd[1546]: time="2025-01-29T12:14:32.107348412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:14:32.107792 containerd[1546]: time="2025-01-29T12:14:32.107729958Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:14:32.107949 containerd[1546]: time="2025-01-29T12:14:32.107933654Z" level=info msg="Connect containerd service" Jan 29 12:14:32.108037 containerd[1546]: time="2025-01-29T12:14:32.108023548Z" level=info msg="using legacy CRI server" Jan 29 12:14:32.108118 containerd[1546]: time="2025-01-29T12:14:32.108104259Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:14:32.108254 containerd[1546]: time="2025-01-29T12:14:32.108239318Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:14:32.109019 containerd[1546]: time="2025-01-29T12:14:32.108990137Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:14:32.109433 containerd[1546]: time="2025-01-29T12:14:32.109299997Z" level=info msg="Start subscribing containerd event" Jan 29 12:14:32.109433 containerd[1546]: time="2025-01-29T12:14:32.109372158Z" level=info msg="Start recovering state" Jan 29 12:14:32.109487 containerd[1546]: time="2025-01-29T12:14:32.109442933Z" level=info msg="Start event monitor" Jan 29 12:14:32.109487 containerd[1546]: time="2025-01-29T12:14:32.109454571Z" level=info msg="Start snapshots syncer" Jan 29 12:14:32.109487 containerd[1546]: time="2025-01-29T12:14:32.109466604Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:14:32.109487 containerd[1546]: time="2025-01-29T12:14:32.109473966Z" level=info msg="Start streaming server" Jan 29 12:14:32.111553 containerd[1546]: time="2025-01-29T12:14:32.109782757Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:14:32.111553 containerd[1546]: time="2025-01-29T12:14:32.109841578Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:14:32.109999 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:14:32.112710 containerd[1546]: time="2025-01-29T12:14:32.111991555Z" level=info msg="containerd successfully booted in 0.045590s" Jan 29 12:14:32.210550 tar[1541]: linux-arm64/LICENSE Jan 29 12:14:32.210649 tar[1541]: linux-arm64/README.md Jan 29 12:14:32.227743 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:14:32.325142 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:14:32.343657 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:14:32.356331 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:14:32.361635 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:14:32.361850 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:14:32.364531 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:14:32.375638 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:14:32.378434 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:14:32.380450 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 12:14:32.381696 systemd[1]: Reached target getty.target - Login Prompts. 
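The long "Start cri plugin with config {...}" record is containerd's CRI service echoing its effective configuration, and the error that follows ("no network config found in /etc/cni/net.d") is expected on a first boot with no CNI plugin installed yet; the "Start cni network conf syncer for default" worker keeps watching that directory until a config appears. A sketch of where those settings live on disk, using values visible in the dump (overlayfs snapshotter, runc v2 shim, SystemdCgroup=false), plus a hypothetical minimal conflist that would satisfy the syncer -- the network name and subnet below are placeholders, not taken from this host:

  # /etc/containerd/config.toml -- sketch reflecting the dump above (containerd 1.7, config version 2)
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"

  # /etc/cni/net.d/10-bridge.conflist -- hypothetical minimal network
  # (NetworkPluginMaxConfNum:1 in the dump means only the first conf file is loaded)
  {
    "cniVersion": "0.4.0",
    "name": "bridge-net",
    "plugins": [{
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }]
  }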
Jan 29 12:14:32.535249 systemd-networkd[1230]: eth0: Gained IPv6LL Jan 29 12:14:32.537685 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:14:32.539544 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:14:32.552346 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:14:32.555166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:14:32.557514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:14:32.574665 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:14:32.574899 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:14:32.576741 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:14:32.579667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:14:33.037268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:33.038907 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:14:33.040243 systemd[1]: Startup finished in 5.318s (kernel) + 3.602s (userspace) = 8.921s. Jan 29 12:14:33.040790 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:14:33.504192 kubelet[1645]: E0129 12:14:33.504030 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:14:33.506395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:14:33.506587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:14:37.499972 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:14:37.517354 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:42362.service - OpenSSH per-connection server daemon (10.0.0.1:42362). Jan 29 12:14:37.567131 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 42362 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:37.568699 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:37.581048 systemd-logind[1526]: New session 1 of user core. Jan 29 12:14:37.581992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:14:37.593316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:14:37.603517 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:14:37.606029 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:14:37.613018 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:14:37.696900 systemd[1665]: Queued start job for default target default.target. Jan 29 12:14:37.697304 systemd[1665]: Created slice app.slice - User Application Slice. Jan 29 12:14:37.697330 systemd[1665]: Reached target paths.target - Paths. Jan 29 12:14:37.697341 systemd[1665]: Reached target timers.target - Timers. Jan 29 12:14:37.710197 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... 
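Backing up to the kubelet failure above: run.go:74 aborts because /var/lib/kubelet/config.yaml does not exist yet. In a kubeadm-style bring-up that file is only written by kubeadm init or kubeadm join, so the unit is expected to crash-loop (systemd schedules the restarts seen further down) until one of those runs. A minimal sketch of the file's shape -- the values are assumptions taken from the NodeConfig dump later in this log (cgroupfs driver, static pods from /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt), not the real file:

  # /var/lib/kubelet/config.yaml -- sketch of what kubeadm would generate
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    anonymous:
      enabled: false
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt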
Jan 29 12:14:37.716553 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:14:37.716626 systemd[1665]: Reached target sockets.target - Sockets. Jan 29 12:14:37.716639 systemd[1665]: Reached target basic.target - Basic System. Jan 29 12:14:37.716681 systemd[1665]: Reached target default.target - Main User Target. Jan 29 12:14:37.716710 systemd[1665]: Startup finished in 98ms. Jan 29 12:14:37.717104 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:14:37.718628 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:14:37.775587 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:42378.service - OpenSSH per-connection server daemon (10.0.0.1:42378). Jan 29 12:14:37.815236 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:37.816593 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:37.822894 systemd-logind[1526]: New session 2 of user core. Jan 29 12:14:37.834400 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:14:37.889039 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 29 12:14:37.899360 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:42380.service - OpenSSH per-connection server daemon (10.0.0.1:42380). Jan 29 12:14:37.899750 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:42378.service: Deactivated successfully. Jan 29 12:14:37.901926 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:14:37.902467 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:14:37.904110 systemd-logind[1526]: Removed session 2. Jan 29 12:14:37.931099 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 42380 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:37.932393 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:37.937229 systemd-logind[1526]: New session 3 of user core. Jan 29 12:14:37.948413 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:14:37.997623 sshd[1682]: pam_unix(sshd:session): session closed for user core Jan 29 12:14:38.008449 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:42396.service - OpenSSH per-connection server daemon (10.0.0.1:42396). Jan 29 12:14:38.009322 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:42380.service: Deactivated successfully. Jan 29 12:14:38.010810 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:14:38.011407 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:14:38.012596 systemd-logind[1526]: Removed session 3. Jan 29 12:14:38.039978 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 42396 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:38.041514 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:38.045865 systemd-logind[1526]: New session 4 of user core. Jan 29 12:14:38.054357 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:14:38.106787 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 29 12:14:38.126367 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:42412.service - OpenSSH per-connection server daemon (10.0.0.1:42412). Jan 29 12:14:38.126789 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:42396.service: Deactivated successfully. Jan 29 12:14:38.128301 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 29 12:14:38.128855 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:14:38.130039 systemd-logind[1526]: Removed session 4. Jan 29 12:14:38.157489 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 42412 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:38.159026 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:38.163151 systemd-logind[1526]: New session 5 of user core. Jan 29 12:14:38.175310 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:14:38.238208 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:14:38.238476 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:14:38.256873 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 29 12:14:38.258558 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 29 12:14:38.267298 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:42428.service - OpenSSH per-connection server daemon (10.0.0.1:42428). Jan 29 12:14:38.267676 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:42412.service: Deactivated successfully. Jan 29 12:14:38.269388 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:14:38.269942 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:14:38.271301 systemd-logind[1526]: Removed session 5. Jan 29 12:14:38.298855 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 42428 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:38.300304 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:38.304597 systemd-logind[1526]: New session 6 of user core. Jan 29 12:14:38.316376 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:14:38.368037 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:14:38.368354 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:14:38.371602 sudo[1715]: pam_unix(sudo:session): session closed for user root Jan 29 12:14:38.376647 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:14:38.376936 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:14:38.401395 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:14:38.402836 auditctl[1718]: No rules Jan 29 12:14:38.403699 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:14:38.403981 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:14:38.405796 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:14:38.430536 augenrules[1737]: No rules Jan 29 12:14:38.431852 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:14:38.432889 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 29 12:14:38.434602 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 29 12:14:38.446352 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:42434.service - OpenSSH per-connection server daemon (10.0.0.1:42434). Jan 29 12:14:38.446868 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:42428.service: Deactivated successfully. Jan 29 12:14:38.448357 systemd[1]: session-6.scope: Deactivated successfully. 
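The three sudo records above reconstruct into a clear sequence from the provisioning user: switch SELinux to enforcing, delete two shipped audit rule files, then restart audit-rules. The restart also explains the two "No rules" lines: stopping the service runs auditctl against an empty ruleset (the first "No rules"), and starting it re-runs augenrules, which compiles /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and now finds nothing to load (the second). As a shell sketch of the same steps:

  # reconstruction of the sudo commands logged above (run by the core user)
  sudo /usr/sbin/setenforce 1
  sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
  sudo systemctl restart audit-rules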
Jan 29 12:14:38.449616 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:14:38.450558 systemd-logind[1526]: Removed session 6. Jan 29 12:14:38.477832 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 42434 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:14:38.479146 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:14:38.483160 systemd-logind[1526]: New session 7 of user core. Jan 29 12:14:38.489471 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:14:38.539951 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:14:38.540617 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:14:38.843385 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:14:38.843521 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:14:39.095878 dockerd[1768]: time="2025-01-29T12:14:39.095744998Z" level=info msg="Starting up" Jan 29 12:14:39.164760 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2923569281-merged.mount: Deactivated successfully. Jan 29 12:14:39.339103 dockerd[1768]: time="2025-01-29T12:14:39.338788098Z" level=info msg="Loading containers: start." Jan 29 12:14:39.420103 kernel: Initializing XFRM netlink socket Jan 29 12:14:39.485177 systemd-networkd[1230]: docker0: Link UP Jan 29 12:14:39.506460 dockerd[1768]: time="2025-01-29T12:14:39.506405963Z" level=info msg="Loading containers: done." Jan 29 12:14:39.520306 dockerd[1768]: time="2025-01-29T12:14:39.520245141Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:14:39.520496 dockerd[1768]: time="2025-01-29T12:14:39.520363455Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:14:39.520496 dockerd[1768]: time="2025-01-29T12:14:39.520472009Z" level=info msg="Daemon has completed initialization" Jan 29 12:14:39.548773 dockerd[1768]: time="2025-01-29T12:14:39.548703973Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:14:39.549034 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:14:40.162443 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3609908233-merged.mount: Deactivated successfully. Jan 29 12:14:40.312449 containerd[1546]: time="2025-01-29T12:14:40.312408167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:14:41.080892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687857472.mount: Deactivated successfully. 
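The overlay2 warning at 12:14:39.520 above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker cannot use the native overlayfs diff path and falls back to the slower naive diff when committing or building layers; nothing on the host is misconfigured. A hedged way to confirm the kernel option and driver from userspace -- this assumes the kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC=y), which may not hold on every image:

  # confirm the kernel option named in the warning, and the active storage driver
  zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR    # assumption: /proc/config.gz exists
  docker info --format '{{.Driver}}'                     # expect: overlay2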
Jan 29 12:14:42.825630 containerd[1546]: time="2025-01-29T12:14:42.825282833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:42.826043 containerd[1546]: time="2025-01-29T12:14:42.825696058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 29 12:14:42.826569 containerd[1546]: time="2025-01-29T12:14:42.826541456Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:42.829462 containerd[1546]: time="2025-01-29T12:14:42.829434469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:42.830957 containerd[1546]: time="2025-01-29T12:14:42.830670195Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.518216025s" Jan 29 12:14:42.830957 containerd[1546]: time="2025-01-29T12:14:42.830711521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 12:14:42.849120 containerd[1546]: time="2025-01-29T12:14:42.849088123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:14:43.756827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:14:43.772335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:14:43.862481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:43.866833 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:14:43.907955 kubelet[1997]: E0129 12:14:43.907876 1997 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:14:43.910833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:14:43.911023 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
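A note on the pull records in this stretch: the "PullImage"/"ImageCreate" lines come from containerd's CRI service, not from docker, so these control-plane images land in containerd's k8s.io namespace and will not show up in docker images. Fetching the full v1.30.9 set back to back looks like a kubeadm-style preflight pull, though the log does not show the triggering command. Hedged equivalents for inspecting or reproducing it:

  # CRI-pulled images live in containerd's k8s.io namespace
  crictl images                                    # assumes crictl is pointed at /run/containerd/containerd.sock
  ctr -n k8s.io images ls | grep kube-apiserver
  # kubeadm can prefetch the same set explicitly
  kubeadm config images pull --kubernetes-version v1.30.9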
Jan 29 12:14:44.774009 containerd[1546]: time="2025-01-29T12:14:44.773960023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:44.774986 containerd[1546]: time="2025-01-29T12:14:44.774937728Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 29 12:14:44.776096 containerd[1546]: time="2025-01-29T12:14:44.775541100Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:44.778649 containerd[1546]: time="2025-01-29T12:14:44.778602865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:44.780270 containerd[1546]: time="2025-01-29T12:14:44.780157397Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.930962322s" Jan 29 12:14:44.780270 containerd[1546]: time="2025-01-29T12:14:44.780189570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 12:14:44.798127 containerd[1546]: time="2025-01-29T12:14:44.798094618Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:14:46.171325 containerd[1546]: time="2025-01-29T12:14:46.171278365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:46.172316 containerd[1546]: time="2025-01-29T12:14:46.172107072Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 29 12:14:46.172983 containerd[1546]: time="2025-01-29T12:14:46.172942009Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:46.176160 containerd[1546]: time="2025-01-29T12:14:46.176120937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:46.177346 containerd[1546]: time="2025-01-29T12:14:46.177309505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.379177001s" Jan 29 12:14:46.177421 containerd[1546]: time="2025-01-29T12:14:46.177349041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 12:14:46.196165 
containerd[1546]: time="2025-01-29T12:14:46.196126281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:14:47.569129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702629276.mount: Deactivated successfully. Jan 29 12:14:47.863356 containerd[1546]: time="2025-01-29T12:14:47.863218400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:47.864171 containerd[1546]: time="2025-01-29T12:14:47.864126882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 12:14:47.865187 containerd[1546]: time="2025-01-29T12:14:47.865155075Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:47.867312 containerd[1546]: time="2025-01-29T12:14:47.867284799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:47.868014 containerd[1546]: time="2025-01-29T12:14:47.867980500Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.671812483s" Jan 29 12:14:47.868094 containerd[1546]: time="2025-01-29T12:14:47.868018406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 12:14:47.885796 containerd[1546]: time="2025-01-29T12:14:47.885766355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:14:48.734668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012815002.mount: Deactivated successfully. 
Jan 29 12:14:49.738330 containerd[1546]: time="2025-01-29T12:14:49.738279417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:49.739274 containerd[1546]: time="2025-01-29T12:14:49.738967915Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 12:14:49.740096 containerd[1546]: time="2025-01-29T12:14:49.739836540Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:49.742949 containerd[1546]: time="2025-01-29T12:14:49.742913866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:49.744255 containerd[1546]: time="2025-01-29T12:14:49.744200600Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.858398093s" Jan 29 12:14:49.744255 containerd[1546]: time="2025-01-29T12:14:49.744240398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 12:14:49.762217 containerd[1546]: time="2025-01-29T12:14:49.762182116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:14:50.348462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068610488.mount: Deactivated successfully. 
Jan 29 12:14:50.354179 containerd[1546]: time="2025-01-29T12:14:50.354059053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:50.355007 containerd[1546]: time="2025-01-29T12:14:50.354972952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 29 12:14:50.356086 containerd[1546]: time="2025-01-29T12:14:50.356046940Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:50.358548 containerd[1546]: time="2025-01-29T12:14:50.358512897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:50.359114 containerd[1546]: time="2025-01-29T12:14:50.359031288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 596.689543ms" Jan 29 12:14:50.359114 containerd[1546]: time="2025-01-29T12:14:50.359057104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 12:14:50.376181 containerd[1546]: time="2025-01-29T12:14:50.376148321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:14:51.026944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533448399.mount: Deactivated successfully. Jan 29 12:14:53.764596 containerd[1546]: time="2025-01-29T12:14:53.764539551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:53.765218 containerd[1546]: time="2025-01-29T12:14:53.765191979Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 29 12:14:53.765982 containerd[1546]: time="2025-01-29T12:14:53.765936430Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:53.768997 containerd[1546]: time="2025-01-29T12:14:53.768954326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:14:53.771251 containerd[1546]: time="2025-01-29T12:14:53.771219496Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.395037484s" Jan 29 12:14:53.771328 containerd[1546]: time="2025-01-29T12:14:53.771254314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 12:14:54.161348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
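One mismatch worth flagging: the CRI config dumped earlier pins SandboxImage to registry.k8s.io/pause:3.8, while the pull sequence just above fetched pause:3.9 (the version Kubernetes v1.30 expects) -- which is why a second pause pull, for 3.8, appears further down when the first pod sandboxes are created. The sandbox image is chosen by containerd, and, as the kubelet's own server.go:205 line below notes, the kubelet-side flag only protects the image from garbage collection, so containerd's config is where the two would be aligned. A sketch, not this host's file:

  # /etc/containerd/config.toml -- align the sandbox image with what the kubelet pulls (sketch)
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"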
Jan 29 12:14:54.170341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:14:54.321805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:54.325968 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:14:54.361847 kubelet[2188]: E0129 12:14:54.361799 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:14:54.364308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:14:54.364649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:14:57.844889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:57.859264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:14:57.877200 systemd[1]: Reloading requested from client PID 2250 ('systemctl') (unit session-7.scope)... Jan 29 12:14:57.877216 systemd[1]: Reloading... Jan 29 12:14:57.939114 zram_generator::config[2291]: No configuration found. Jan 29 12:14:58.160059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:14:58.208127 systemd[1]: Reloading finished in 330 ms. Jan 29 12:14:58.236674 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:14:58.236834 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:14:58.237246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:58.239243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:14:58.325947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:14:58.330728 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:14:58.368327 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:14:58.368327 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:14:58.368327 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
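The "Referenced but unset environment variable" notes (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) and the deprecated-flag warnings fit the standard kubeadm unit layout: a systemd drop-in assembles the kubelet command line from environment files, and flags such as --container-runtime-endpoint arrive via KUBELET_KUBEADM_ARGS rather than the config file. A sketch of that layout as upstream kubeadm ships it -- exact paths vary by distro, so treat them as assumptions; the kubeadm-flags.env contents are hypothetical, matching the warnings above:

  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -- upstream kubeadm drop-in (sketch)
  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
  EnvironmentFile=-/etc/default/kubelet
  ExecStart=
  ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

  # /var/lib/kubelet/kubeadm-flags.env -- hypothetical contents matching the deprecation warnings
  KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"

The "Client rotation is on, will bootstrap in background" line below corresponds to the --bootstrap-kubeconfig path in this drop-in.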
Jan 29 12:14:58.368630 kubelet[2347]: I0129 12:14:58.368422 2347 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:14:59.376361 kubelet[2347]: I0129 12:14:59.376312 2347 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:14:59.376361 kubelet[2347]: I0129 12:14:59.376341 2347 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:14:59.376707 kubelet[2347]: I0129 12:14:59.376540 2347 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:14:59.398705 kubelet[2347]: I0129 12:14:59.398605 2347 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:14:59.398773 kubelet[2347]: E0129 12:14:59.398758 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.408051 kubelet[2347]: I0129 12:14:59.408034 2347 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:14:59.408492 kubelet[2347]: I0129 12:14:59.408470 2347 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:14:59.408640 kubelet[2347]: I0129 12:14:59.408495 2347 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:14:59.408720 kubelet[2347]: I0129 12:14:59.408714 2347 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:14:59.408743 kubelet[2347]: I0129 12:14:59.408722 2347 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:14:59.408905 kubelet[2347]: I0129 12:14:59.408894 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 29 
12:14:59.409833 kubelet[2347]: I0129 12:14:59.409817 2347 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:14:59.409872 kubelet[2347]: I0129 12:14:59.409837 2347 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:14:59.410092 kubelet[2347]: I0129 12:14:59.410034 2347 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:14:59.410092 kubelet[2347]: I0129 12:14:59.410056 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:14:59.411488 kubelet[2347]: W0129 12:14:59.411152 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.411488 kubelet[2347]: E0129 12:14:59.411278 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.411704 kubelet[2347]: I0129 12:14:59.411680 2347 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:14:59.411938 kubelet[2347]: W0129 12:14:59.411902 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.412046 kubelet[2347]: E0129 12:14:59.412015 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.412046 kubelet[2347]: I0129 12:14:59.412031 2347 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:14:59.413197 kubelet[2347]: W0129 12:14:59.412191 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
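The "dial tcp 10.0.0.139:6443: connect: connection refused" failures, here and throughout the records that follow, are the normal bootstrap ordering problem rather than a network fault: the kubelet's informers, CSR client and lease controller all target the API server, but kube-apiserver is itself one of the static pods this kubelet is about to launch from /etc/kubernetes/manifests. The reflectors retry with backoff and go quiet once the apiserver container is listening on 6443. Hedged commands to watch that from the node, with the socket path taken from the CRI config above:

  # watch the static control-plane sandboxes come up
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
  crictl ps --name kube-apiserver
  # probe the endpoint the kubelet keeps retrying
  curl -k https://10.0.0.139:6443/healthz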
Jan 29 12:14:59.413197 kubelet[2347]: I0129 12:14:59.412963 2347 server.go:1264] "Started kubelet" Jan 29 12:14:59.417051 kubelet[2347]: I0129 12:14:59.414774 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:14:59.417051 kubelet[2347]: I0129 12:14:59.415122 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:14:59.417051 kubelet[2347]: I0129 12:14:59.415790 2347 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:14:59.417051 kubelet[2347]: I0129 12:14:59.416648 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:14:59.417051 kubelet[2347]: I0129 12:14:59.416859 2347 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:14:59.420302 kubelet[2347]: E0129 12:14:59.420057 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f28da56d2e9e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:14:59.41294538 +0000 UTC m=+1.079196427,LastTimestamp:2025-01-29 12:14:59.41294538 +0000 UTC m=+1.079196427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.420394 2347 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.420472 2347 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.420639 2347 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:14:59.435223 kubelet[2347]: W0129 12:14:59.420939 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.435223 kubelet[2347]: E0129 12:14:59.420980 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.435223 kubelet[2347]: E0129 12:14:59.425056 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.426408 2347 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.426483 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:14:59.435223 kubelet[2347]: E0129 12:14:59.427476 2347 kubelet.go:1467] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.427581 2347 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.434163 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:14:59.435223 kubelet[2347]: I0129 12:14:59.435215 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:14:59.436391 kubelet[2347]: I0129 12:14:59.435370 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:14:59.436391 kubelet[2347]: I0129 12:14:59.435389 2347 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:14:59.436391 kubelet[2347]: E0129 12:14:59.435425 2347 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:14:59.441895 kubelet[2347]: W0129 12:14:59.441852 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.441895 kubelet[2347]: E0129 12:14:59.441897 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:14:59.445391 kubelet[2347]: I0129 12:14:59.445349 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:14:59.446154 kubelet[2347]: I0129 12:14:59.446136 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:14:59.446199 kubelet[2347]: I0129 12:14:59.446163 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:14:59.517936 kubelet[2347]: I0129 12:14:59.517913 2347 policy_none.go:49] "None policy: Start" Jan 29 12:14:59.518628 kubelet[2347]: I0129 12:14:59.518615 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:14:59.518725 kubelet[2347]: I0129 12:14:59.518639 2347 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:14:59.521931 kubelet[2347]: I0129 12:14:59.521913 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:14:59.523210 kubelet[2347]: I0129 12:14:59.523177 2347 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:14:59.523423 kubelet[2347]: I0129 12:14:59.523383 2347 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:14:59.523500 kubelet[2347]: I0129 12:14:59.523489 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:14:59.525014 kubelet[2347]: E0129 12:14:59.524995 2347 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 12:14:59.525806 kubelet[2347]: E0129 12:14:59.525775 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jan 29 12:14:59.536030 kubelet[2347]: I0129 12:14:59.535988 2347 topology_manager.go:215] "Topology Admit 
Handler" podUID="3985d4628b6218db675a056fb54aabfb" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:14:59.536957 kubelet[2347]: I0129 12:14:59.536922 2347 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:14:59.538268 kubelet[2347]: I0129 12:14:59.538239 2347 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:14:59.626584 kubelet[2347]: E0129 12:14:59.626465 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Jan 29 12:14:59.721853 kubelet[2347]: I0129 12:14:59.721732 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:14:59.721853 kubelet[2347]: I0129 12:14:59.721768 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:14:59.721853 kubelet[2347]: I0129 12:14:59.721789 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:14:59.721853 kubelet[2347]: I0129 12:14:59.721806 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:14:59.721853 kubelet[2347]: I0129 12:14:59.721823 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:14:59.722097 kubelet[2347]: I0129 12:14:59.721843 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:14:59.722097 kubelet[2347]: I0129 12:14:59.721859 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:14:59.722097 kubelet[2347]: I0129 12:14:59.721873 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:14:59.722097 kubelet[2347]: I0129 12:14:59.721887 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:14:59.727812 kubelet[2347]: I0129 12:14:59.727774 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:14:59.728134 kubelet[2347]: E0129 12:14:59.728110 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jan 29 12:14:59.842157 kubelet[2347]: E0129 12:14:59.842130 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:14:59.844446 kubelet[2347]: E0129 12:14:59.844412 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:14:59.844446 kubelet[2347]: E0129 12:14:59.844434 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:14:59.844544 containerd[1546]: time="2025-01-29T12:14:59.844498216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 12:14:59.844836 containerd[1546]: time="2025-01-29T12:14:59.844755903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 12:14:59.844890 containerd[1546]: time="2025-01-29T12:14:59.844756583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3985d4628b6218db675a056fb54aabfb,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:00.027053 kubelet[2347]: E0129 12:15:00.026937 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Jan 29 12:15:00.129325 kubelet[2347]: I0129 12:15:00.129293 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:15:00.129637 kubelet[2347]: E0129 12:15:00.129615 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jan 29 12:15:00.417505 
kubelet[2347]: W0129 12:15:00.417359 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.417505 kubelet[2347]: E0129 12:15:00.417419 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.482573 kubelet[2347]: W0129 12:15:00.482496 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.482573 kubelet[2347]: E0129 12:15:00.482554 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.487778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492389099.mount: Deactivated successfully. Jan 29 12:15:00.493066 containerd[1546]: time="2025-01-29T12:15:00.492950415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:00.494307 containerd[1546]: time="2025-01-29T12:15:00.494280566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:15:00.494961 containerd[1546]: time="2025-01-29T12:15:00.494858743Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:00.497099 containerd[1546]: time="2025-01-29T12:15:00.495831502Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:00.497353 containerd[1546]: time="2025-01-29T12:15:00.497319213Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:00.498060 containerd[1546]: time="2025-01-29T12:15:00.498035476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:15:00.498750 containerd[1546]: time="2025-01-29T12:15:00.498697832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 12:15:00.501501 containerd[1546]: time="2025-01-29T12:15:00.499626162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:00.501915 containerd[1546]: time="2025-01-29T12:15:00.501870206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.290373ms" Jan 29 12:15:00.503188 containerd[1546]: time="2025-01-29T12:15:00.503163326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 658.353078ms" Jan 29 12:15:00.505197 containerd[1546]: time="2025-01-29T12:15:00.505072933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.170231ms" Jan 29 12:15:00.617565 kubelet[2347]: W0129 12:15:00.617500 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.617565 kubelet[2347]: E0129 12:15:00.617567 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jan 29 12:15:00.672984 containerd[1546]: time="2025-01-29T12:15:00.672724128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:00.672984 containerd[1546]: time="2025-01-29T12:15:00.672805188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:00.672984 containerd[1546]: time="2025-01-29T12:15:00.672817665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.673320 containerd[1546]: time="2025-01-29T12:15:00.673253317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.673712 containerd[1546]: time="2025-01-29T12:15:00.673642820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:00.673865 containerd[1546]: time="2025-01-29T12:15:00.673705205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:00.673865 containerd[1546]: time="2025-01-29T12:15:00.673720761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.674956 containerd[1546]: time="2025-01-29T12:15:00.674224236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:00.674956 containerd[1546]: time="2025-01-29T12:15:00.674259667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:00.674956 containerd[1546]: time="2025-01-29T12:15:00.674269425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.674956 containerd[1546]: time="2025-01-29T12:15:00.674335409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.675150 containerd[1546]: time="2025-01-29T12:15:00.673846690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:00.714968 containerd[1546]: time="2025-01-29T12:15:00.714923556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3985d4628b6218db675a056fb54aabfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f88076713b504e90826bbc1dc09ee4d5a96c683ef9d588d329d9cc3ffb4d3e93\"" Jan 29 12:15:00.716370 kubelet[2347]: E0129 12:15:00.716338 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:00.717124 containerd[1546]: time="2025-01-29T12:15:00.717090099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7e1919585812b09a98a5382a33fb871099c44cc4e9fd705de71cb1756b6f893\"" Jan 29 12:15:00.718599 kubelet[2347]: E0129 12:15:00.718572 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:00.719581 containerd[1546]: time="2025-01-29T12:15:00.719553969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ecc70d7c2dedbcd566e21747479f301eccb2a6ce3ed234ead413c449fc00093\"" Jan 29 12:15:00.720110 kubelet[2347]: E0129 12:15:00.720064 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:00.721048 containerd[1546]: time="2025-01-29T12:15:00.721015967Z" level=info msg="CreateContainer within sandbox \"f88076713b504e90826bbc1dc09ee4d5a96c683ef9d588d329d9cc3ffb4d3e93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:15:00.721860 containerd[1546]: time="2025-01-29T12:15:00.721833644Z" level=info msg="CreateContainer within sandbox \"a7e1919585812b09a98a5382a33fb871099c44cc4e9fd705de71cb1756b6f893\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:15:00.723271 containerd[1546]: time="2025-01-29T12:15:00.723239696Z" level=info msg="CreateContainer within sandbox \"6ecc70d7c2dedbcd566e21747479f301eccb2a6ce3ed234ead413c449fc00093\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:15:00.737950 containerd[1546]: time="2025-01-29T12:15:00.737884748Z" level=info msg="CreateContainer within sandbox 
\"a7e1919585812b09a98a5382a33fb871099c44cc4e9fd705de71cb1756b6f893\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ffc423c199ecf7a1b5d994fe09293e326f0481530ee285ec63ebfa03bfbaab6\"" Jan 29 12:15:00.738671 containerd[1546]: time="2025-01-29T12:15:00.738474282Z" level=info msg="StartContainer for \"9ffc423c199ecf7a1b5d994fe09293e326f0481530ee285ec63ebfa03bfbaab6\"" Jan 29 12:15:00.741003 containerd[1546]: time="2025-01-29T12:15:00.740951549Z" level=info msg="CreateContainer within sandbox \"f88076713b504e90826bbc1dc09ee4d5a96c683ef9d588d329d9cc3ffb4d3e93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95fe4378dcc7576c1239629005e4ca8e15d82450068d042e5046934338682739\"" Jan 29 12:15:00.741534 containerd[1546]: time="2025-01-29T12:15:00.741411035Z" level=info msg="CreateContainer within sandbox \"6ecc70d7c2dedbcd566e21747479f301eccb2a6ce3ed234ead413c449fc00093\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"393bed2c37d3a05d6dec7678084691d87805aeaeb1eb0731dc58a4a26b154bd3\"" Jan 29 12:15:00.741636 containerd[1546]: time="2025-01-29T12:15:00.741611385Z" level=info msg="StartContainer for \"95fe4378dcc7576c1239629005e4ca8e15d82450068d042e5046934338682739\"" Jan 29 12:15:00.741799 containerd[1546]: time="2025-01-29T12:15:00.741765987Z" level=info msg="StartContainer for \"393bed2c37d3a05d6dec7678084691d87805aeaeb1eb0731dc58a4a26b154bd3\"" Jan 29 12:15:00.810129 containerd[1546]: time="2025-01-29T12:15:00.805981562Z" level=info msg="StartContainer for \"9ffc423c199ecf7a1b5d994fe09293e326f0481530ee285ec63ebfa03bfbaab6\" returns successfully" Jan 29 12:15:00.810129 containerd[1546]: time="2025-01-29T12:15:00.806148041Z" level=info msg="StartContainer for \"95fe4378dcc7576c1239629005e4ca8e15d82450068d042e5046934338682739\" returns successfully" Jan 29 12:15:00.810129 containerd[1546]: time="2025-01-29T12:15:00.806188591Z" level=info msg="StartContainer for \"393bed2c37d3a05d6dec7678084691d87805aeaeb1eb0731dc58a4a26b154bd3\" returns successfully" Jan 29 12:15:00.827710 kubelet[2347]: E0129 12:15:00.827653 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Jan 29 12:15:00.931872 kubelet[2347]: I0129 12:15:00.931394 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:15:00.931872 kubelet[2347]: E0129 12:15:00.931710 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jan 29 12:15:01.448379 kubelet[2347]: E0129 12:15:01.448351 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:01.453408 kubelet[2347]: E0129 12:15:01.452940 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:01.454067 kubelet[2347]: E0129 12:15:01.454050 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:02.431900 kubelet[2347]: E0129 
12:15:02.431857 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 12:15:02.455752 kubelet[2347]: E0129 12:15:02.455587 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:02.533438 kubelet[2347]: I0129 12:15:02.533154 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:15:02.548051 kubelet[2347]: I0129 12:15:02.547934 2347 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:15:02.553966 kubelet[2347]: E0129 12:15:02.553933 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:02.654388 kubelet[2347]: E0129 12:15:02.654344 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:02.755068 kubelet[2347]: E0129 12:15:02.754976 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:02.855485 kubelet[2347]: E0129 12:15:02.855450 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:02.955744 kubelet[2347]: E0129 12:15:02.955702 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:03.056453 kubelet[2347]: E0129 12:15:03.056353 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:03.157552 kubelet[2347]: E0129 12:15:03.157475 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:03.258223 kubelet[2347]: E0129 12:15:03.258142 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:03.359278 kubelet[2347]: E0129 12:15:03.359149 2347 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:15:04.287616 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-7.scope)... Jan 29 12:15:04.287633 systemd[1]: Reloading... Jan 29 12:15:04.339113 zram_generator::config[2664]: No configuration found. Jan 29 12:15:04.412948 kubelet[2347]: I0129 12:15:04.412905 2347 apiserver.go:52] "Watching apiserver" Jan 29 12:15:04.421529 kubelet[2347]: I0129 12:15:04.421488 2347 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:15:04.425037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:04.481134 systemd[1]: Reloading finished in 193 ms. Jan 29 12:15:04.506728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:04.525007 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:15:04.525371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:04.540304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:04.628916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
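The reflector and lease failures above all trace to a single condition: nothing was listening on 10.0.0.139:6443 until the kube-apiserver static pod came up, so every client inside kubelet saw "connection refused" and retried with backoff. A minimal standalone probe of the same endpoint, assuming only the address from the log (the retry loop and the skipped certificate check are diagnostic conveniences, not kubelet behavior):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Bootstrap-time probe: the apiserver's certificate is not yet trusted here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.0.139:6443/healthz")
		if err != nil {
			// Mirrors the reflector errors above until the static pod serves:
			// "dial tcp 10.0.0.139:6443: connect: connection refused"
			fmt.Println("not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver:", resp.Status)
		return
	}
}
```

Once the control-plane containers started (the "StartContainer ... returns successfully" entries), registration succeeded on the next attempt, which is the "Successfully registered node" transition above.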
Jan 29 12:15:04.634050 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:15:04.671553 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:15:04.672017 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:15:04.672017 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:15:04.672198 kubelet[2713]: I0129 12:15:04.672050 2713 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:15:04.676549 kubelet[2713]: I0129 12:15:04.676520 2713 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:15:04.676549 kubelet[2713]: I0129 12:15:04.676541 2713 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:15:04.676748 kubelet[2713]: I0129 12:15:04.676726 2713 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:15:04.678192 kubelet[2713]: I0129 12:15:04.677956 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:15:04.679129 kubelet[2713]: I0129 12:15:04.679001 2713 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:15:04.687181 kubelet[2713]: I0129 12:15:04.687146 2713 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:15:04.688195 kubelet[2713]: I0129 12:15:04.688116 2713 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:15:04.688447 kubelet[2713]: I0129 12:15:04.688161 2713 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:15:04.688557 kubelet[2713]: I0129 12:15:04.688546 2713 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:15:04.688609 kubelet[2713]: I0129 12:15:04.688601 2713 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:15:04.688685 kubelet[2713]: I0129 12:15:04.688678 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:15:04.688869 kubelet[2713]: I0129 12:15:04.688820 2713 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:15:04.688869 kubelet[2713]: I0129 12:15:04.688835 2713 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:15:04.690685 kubelet[2713]: I0129 12:15:04.690592 2713 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:15:04.690685 kubelet[2713]: I0129 12:15:04.690631 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:15:04.691678 kubelet[2713]: I0129 12:15:04.691661 2713 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:15:04.693916 kubelet[2713]: I0129 12:15:04.693255 2713 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:15:04.693916 kubelet[2713]: I0129 12:15:04.693584 2713 server.go:1264] "Started kubelet" Jan 29 12:15:04.694221 kubelet[2713]: I0129 12:15:04.694185 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:15:04.695452 kubelet[2713]: I0129 12:15:04.695402 2713 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 
12:15:04.695572 kubelet[2713]: I0129 12:15:04.695551 2713 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:15:04.696421 kubelet[2713]: I0129 12:15:04.696405 2713 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:15:04.701202 kubelet[2713]: I0129 12:15:04.701187 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:15:04.707704 kubelet[2713]: I0129 12:15:04.707688 2713 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:15:04.708644 kubelet[2713]: I0129 12:15:04.708627 2713 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:15:04.709027 kubelet[2713]: I0129 12:15:04.709009 2713 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:15:04.713790 kubelet[2713]: I0129 12:15:04.712712 2713 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:15:04.713864 kubelet[2713]: I0129 12:15:04.713840 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:15:04.714302 kubelet[2713]: E0129 12:15:04.714221 2713 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:15:04.715608 kubelet[2713]: I0129 12:15:04.715558 2713 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:15:04.720153 kubelet[2713]: I0129 12:15:04.720126 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:15:04.721411 kubelet[2713]: I0129 12:15:04.721394 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:15:04.721745 kubelet[2713]: I0129 12:15:04.721505 2713 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:15:04.721745 kubelet[2713]: I0129 12:15:04.721524 2713 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:15:04.721745 kubelet[2713]: E0129 12:15:04.721564 2713 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:15:04.755228 kubelet[2713]: I0129 12:15:04.755203 2713 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:15:04.755228 kubelet[2713]: I0129 12:15:04.755222 2713 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:15:04.755364 kubelet[2713]: I0129 12:15:04.755241 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:15:04.755386 kubelet[2713]: I0129 12:15:04.755378 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:15:04.755408 kubelet[2713]: I0129 12:15:04.755388 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:15:04.755408 kubelet[2713]: I0129 12:15:04.755405 2713 policy_none.go:49] "None policy: Start" Jan 29 12:15:04.756014 kubelet[2713]: I0129 12:15:04.755994 2713 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:15:04.756101 kubelet[2713]: I0129 12:15:04.756024 2713 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:15:04.756194 kubelet[2713]: I0129 12:15:04.756179 2713 state_mem.go:75] "Updated machine memory state" Jan 29 12:15:04.757971 kubelet[2713]: I0129 12:15:04.757728 2713 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:15:04.757971 
kubelet[2713]: I0129 12:15:04.757914 2713 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:15:04.758059 kubelet[2713]: I0129 12:15:04.758014 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:15:04.811953 kubelet[2713]: I0129 12:15:04.811856 2713 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:15:04.819570 kubelet[2713]: I0129 12:15:04.819540 2713 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 12:15:04.819853 kubelet[2713]: I0129 12:15:04.819679 2713 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 12:15:04.821662 kubelet[2713]: I0129 12:15:04.821629 2713 topology_manager.go:215] "Topology Admit Handler" podUID="3985d4628b6218db675a056fb54aabfb" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:15:04.821830 kubelet[2713]: I0129 12:15:04.821776 2713 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:15:04.822253 kubelet[2713]: I0129 12:15:04.821868 2713 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:15:04.911191 kubelet[2713]: I0129 12:15:04.911158 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:15:04.911354 kubelet[2713]: I0129 12:15:04.911334 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:15:04.911599 kubelet[2713]: I0129 12:15:04.911444 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:15:04.911599 kubelet[2713]: I0129 12:15:04.911468 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:15:04.911599 kubelet[2713]: I0129 12:15:04.911487 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:15:04.911599 kubelet[2713]: I0129 12:15:04.911502 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:15:04.911599 kubelet[2713]: I0129 12:15:04.911521 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3985d4628b6218db675a056fb54aabfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3985d4628b6218db675a056fb54aabfb\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:15:04.911717 kubelet[2713]: I0129 12:15:04.911538 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:15:04.911717 kubelet[2713]: I0129 12:15:04.911553 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:15:05.137396 kubelet[2713]: E0129 12:15:05.137277 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.137688 kubelet[2713]: E0129 12:15:05.137652 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.139087 kubelet[2713]: E0129 12:15:05.139029 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.284579 sudo[2747]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 12:15:05.284862 sudo[2747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 12:15:05.691239 kubelet[2713]: I0129 12:15:05.691207 2713 apiserver.go:52] "Watching apiserver" Jan 29 12:15:05.704169 sudo[2747]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:05.711429 kubelet[2713]: I0129 12:15:05.709437 2713 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:15:05.731346 kubelet[2713]: E0129 12:15:05.730859 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.737454 kubelet[2713]: E0129 12:15:05.736563 2713 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 12:15:05.737454 kubelet[2713]: E0129 12:15:05.736566 2713 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 12:15:05.737454 kubelet[2713]: E0129 12:15:05.736812 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.741803 kubelet[2713]: E0129 12:15:05.740145 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:05.752252 kubelet[2713]: I0129 12:15:05.750564 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.750549554 podStartE2EDuration="1.750549554s" podCreationTimestamp="2025-01-29 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:05.748928184 +0000 UTC m=+1.110687867" watchObservedRunningTime="2025-01-29 12:15:05.750549554 +0000 UTC m=+1.112309237" Jan 29 12:15:05.756782 kubelet[2713]: I0129 12:15:05.756394 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.756382222 podStartE2EDuration="1.756382222s" podCreationTimestamp="2025-01-29 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:05.756256148 +0000 UTC m=+1.118015831" watchObservedRunningTime="2025-01-29 12:15:05.756382222 +0000 UTC m=+1.118141865" Jan 29 12:15:05.763380 kubelet[2713]: I0129 12:15:05.763266 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.763255365 podStartE2EDuration="1.763255365s" podCreationTimestamp="2025-01-29 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:05.763091932 +0000 UTC m=+1.124851615" watchObservedRunningTime="2025-01-29 12:15:05.763255365 +0000 UTC m=+1.125015048" Jan 29 12:15:06.734093 kubelet[2713]: E0129 12:15:06.733893 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:06.735735 kubelet[2713]: E0129 12:15:06.734481 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:06.735735 kubelet[2713]: E0129 12:15:06.734675 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:07.312333 sudo[1750]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:07.313782 sshd[1744]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:07.316112 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:42434.service: Deactivated successfully. Jan 29 12:15:07.318914 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:15:07.319599 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:15:07.320722 systemd-logind[1526]: Removed session 7. 
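The recurring dns.go:153 warning is kubelet noticing more nameserver entries in the host's resolv.conf than it will pass through to pods: it keeps only the first three (the classic glibc resolver limit) and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation; the fourth upstream is hypothetical, since the original resolv.conf is not shown in the log:

```go
package main

import "fmt"

// maxNameservers mirrors kubelet's cap, which follows the resolver's
// historical three-server limit.
const maxNameservers = 3

func applyNameserverLimit(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// The first three entries are the applied line from the log; "9.9.9.9" is
	// an invented extra upstream that would trigger the warning.
	applied, truncated := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, "truncated:", truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] truncated: true
}
```

The warning repeats on every sync because the underlying resolv.conf never changes, which is why it shows up dozens of times in this log without indicating a new fault.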
Jan 29 12:15:07.735439 kubelet[2713]: E0129 12:15:07.735278 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:07.735439 kubelet[2713]: E0129 12:15:07.735332 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:13.113720 kubelet[2713]: E0129 12:15:13.113636 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:13.742842 kubelet[2713]: E0129 12:15:13.742800 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:14.744215 kubelet[2713]: E0129 12:15:14.744173 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:15.799525 kubelet[2713]: E0129 12:15:15.799479 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:16.331416 kubelet[2713]: E0129 12:15:16.331324 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:17.129235 update_engine[1529]: I20250129 12:15:17.129112 1529 update_attempter.cc:509] Updating boot flags... Jan 29 12:15:17.147096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2797) Jan 29 12:15:17.174579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2796) Jan 29 12:15:19.127763 kubelet[2713]: I0129 12:15:19.127737 2713 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:15:19.128568 containerd[1546]: time="2025-01-29T12:15:19.128527073Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
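The "Updating runtime config through cri with podcidr" entry is kubelet pushing the pod CIDR down to the container runtime over the CRI; containerd's reply that no CNI config template is specified is the runtime side of the same exchange, waiting for Cilium to drop a config in. A sketch of that call made directly, assuming the stock cri-api generated client and the default containerd socket path:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for a stock containerd install.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Push the same CIDR kubelet reported in the log above.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(
		context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pushed pod CIDR 192.168.0.0/24")
}
```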
Jan 29 12:15:19.129462 kubelet[2713]: I0129 12:15:19.128980 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:15:20.024896 kubelet[2713]: I0129 12:15:20.024837 2713 topology_manager.go:215] "Topology Admit Handler" podUID="8949b4d1-32fd-408c-b256-611a42b15d86" podNamespace="kube-system" podName="kube-proxy-zw4hc" Jan 29 12:15:20.029323 kubelet[2713]: I0129 12:15:20.028000 2713 topology_manager.go:215] "Topology Admit Handler" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" podNamespace="kube-system" podName="cilium-w6qnk" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113008 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vksk\" (UniqueName: \"kubernetes.io/projected/8949b4d1-32fd-408c-b256-611a42b15d86-kube-api-access-5vksk\") pod \"kube-proxy-zw4hc\" (UID: \"8949b4d1-32fd-408c-b256-611a42b15d86\") " pod="kube-system/kube-proxy-zw4hc" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113051 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsc2s\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-kube-api-access-gsc2s\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113075 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cni-path\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113111 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-xtables-lock\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113128 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-etc-cni-netd\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113149 kubelet[2713]: I0129 12:15:20.113143 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-lib-modules\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113172 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8949b4d1-32fd-408c-b256-611a42b15d86-xtables-lock\") pod \"kube-proxy-zw4hc\" (UID: \"8949b4d1-32fd-408c-b256-611a42b15d86\") " pod="kube-system/kube-proxy-zw4hc" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113194 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-config-path\") pod \"cilium-w6qnk\" (UID: 
\"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113209 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/206a458d-f5da-4890-8d3a-8a905e1c67a2-clustermesh-secrets\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113243 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8949b4d1-32fd-408c-b256-611a42b15d86-kube-proxy\") pod \"kube-proxy-zw4hc\" (UID: \"8949b4d1-32fd-408c-b256-611a42b15d86\") " pod="kube-system/kube-proxy-zw4hc" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113257 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-bpf-maps\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113382 kubelet[2713]: I0129 12:15:20.113279 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-net\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113297 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-kernel\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113351 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-hostproc\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113394 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-cgroup\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113426 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8949b4d1-32fd-408c-b256-611a42b15d86-lib-modules\") pod \"kube-proxy-zw4hc\" (UID: \"8949b4d1-32fd-408c-b256-611a42b15d86\") " pod="kube-system/kube-proxy-zw4hc" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113447 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-run\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.113509 kubelet[2713]: I0129 12:15:20.113487 2713 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-hubble-tls\") pod \"cilium-w6qnk\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " pod="kube-system/cilium-w6qnk" Jan 29 12:15:20.250570 kubelet[2713]: I0129 12:15:20.238824 2713 topology_manager.go:215] "Topology Admit Handler" podUID="270598e6-610f-4d04-ad7b-509d3e932f40" podNamespace="kube-system" podName="cilium-operator-599987898-fkz6b" Jan 29 12:15:20.315816 kubelet[2713]: I0129 12:15:20.315689 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtgx\" (UniqueName: \"kubernetes.io/projected/270598e6-610f-4d04-ad7b-509d3e932f40-kube-api-access-xvtgx\") pod \"cilium-operator-599987898-fkz6b\" (UID: \"270598e6-610f-4d04-ad7b-509d3e932f40\") " pod="kube-system/cilium-operator-599987898-fkz6b" Jan 29 12:15:20.315816 kubelet[2713]: I0129 12:15:20.315736 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270598e6-610f-4d04-ad7b-509d3e932f40-cilium-config-path\") pod \"cilium-operator-599987898-fkz6b\" (UID: \"270598e6-610f-4d04-ad7b-509d3e932f40\") " pod="kube-system/cilium-operator-599987898-fkz6b" Jan 29 12:15:20.332048 kubelet[2713]: E0129 12:15:20.332015 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.333935 kubelet[2713]: E0129 12:15:20.333901 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.338045 containerd[1546]: time="2025-01-29T12:15:20.337985859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6qnk,Uid:206a458d-f5da-4890-8d3a-8a905e1c67a2,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:20.338351 containerd[1546]: time="2025-01-29T12:15:20.338241294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw4hc,Uid:8949b4d1-32fd-408c-b256-611a42b15d86,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:20.358491 containerd[1546]: time="2025-01-29T12:15:20.357881270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:20.358628 containerd[1546]: time="2025-01-29T12:15:20.358459818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:20.358628 containerd[1546]: time="2025-01-29T12:15:20.358582816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.358868 containerd[1546]: time="2025-01-29T12:15:20.358818531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.363143 containerd[1546]: time="2025-01-29T12:15:20.362842733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:20.363143 containerd[1546]: time="2025-01-29T12:15:20.363107528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:20.363143 containerd[1546]: time="2025-01-29T12:15:20.363120327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.363256 containerd[1546]: time="2025-01-29T12:15:20.363203966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.390522 containerd[1546]: time="2025-01-29T12:15:20.390484392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6qnk,Uid:206a458d-f5da-4890-8d3a-8a905e1c67a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\"" Jan 29 12:15:20.394398 kubelet[2713]: E0129 12:15:20.394376 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.395366 containerd[1546]: time="2025-01-29T12:15:20.395294698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zw4hc,Uid:8949b4d1-32fd-408c-b256-611a42b15d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dae8be90e5cd5d8d80228fda51ef8640477c843b50765c178a89f994354bc95\"" Jan 29 12:15:20.396143 kubelet[2713]: E0129 12:15:20.396124 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.403398 containerd[1546]: time="2025-01-29T12:15:20.403365580Z" level=info msg="CreateContainer within sandbox \"3dae8be90e5cd5d8d80228fda51ef8640477c843b50765c178a89f994354bc95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:15:20.404205 containerd[1546]: time="2025-01-29T12:15:20.404175724Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:15:20.441834 containerd[1546]: time="2025-01-29T12:15:20.441728390Z" level=info msg="CreateContainer within sandbox \"3dae8be90e5cd5d8d80228fda51ef8640477c843b50765c178a89f994354bc95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b1e023124860cc492f53d19b339992f46bf81554eb61d6a8eb019a10218c5d4\"" Jan 29 12:15:20.444433 containerd[1546]: time="2025-01-29T12:15:20.444400978Z" level=info msg="StartContainer for \"0b1e023124860cc492f53d19b339992f46bf81554eb61d6a8eb019a10218c5d4\"" Jan 29 12:15:20.491741 containerd[1546]: time="2025-01-29T12:15:20.491700133Z" level=info msg="StartContainer for \"0b1e023124860cc492f53d19b339992f46bf81554eb61d6a8eb019a10218c5d4\" returns successfully" Jan 29 12:15:20.568565 kubelet[2713]: E0129 12:15:20.568461 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.569152 containerd[1546]: time="2025-01-29T12:15:20.569055900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fkz6b,Uid:270598e6-610f-4d04-ad7b-509d3e932f40,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:20.600306 containerd[1546]: time="2025-01-29T12:15:20.600147492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:20.600973 containerd[1546]: time="2025-01-29T12:15:20.600219491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:20.600973 containerd[1546]: time="2025-01-29T12:15:20.600727601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.600973 containerd[1546]: time="2025-01-29T12:15:20.600835959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:20.642474 containerd[1546]: time="2025-01-29T12:15:20.642399626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fkz6b,Uid:270598e6-610f-4d04-ad7b-509d3e932f40,Namespace:kube-system,Attempt:0,} returns sandbox id \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\"" Jan 29 12:15:20.643092 kubelet[2713]: E0129 12:15:20.643059 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.759894 kubelet[2713]: E0129 12:15:20.759855 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:20.770098 kubelet[2713]: I0129 12:15:20.769564 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zw4hc" podStartSLOduration=0.769547659 podStartE2EDuration="769.547659ms" podCreationTimestamp="2025-01-29 12:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:20.768206045 +0000 UTC m=+16.129965728" watchObservedRunningTime="2025-01-29 12:15:20.769547659 +0000 UTC m=+16.131307342" Jan 29 12:15:23.034192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432205238.mount: Deactivated successfully. 
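The pod_startup_latency_tracker entry for kube-proxy-zw4hc reports podStartSLOduration=0.769547659s with both pull timestamps at their zero value, meaning no image pull had to happen and the figure is simply the observed running time minus the pod's creation timestamp. Reproducing the arithmetic from the timestamps in that entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-proxy-zw4hc entry above; parse errors
	// are ignored because the inputs are fixed literals.
	created, _ := time.Parse(time.RFC3339, "2025-01-29T12:15:20Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-29T12:15:20.769547659Z")

	fmt.Println("podStartSLOduration:", running.Sub(created)) // 769.547659ms
}
```

The same subtraction accounts for the ~1.75s figures reported earlier for the static control-plane pods against their 12:15:04 creation timestamps.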
Jan 29 12:15:24.271208 containerd[1546]: time="2025-01-29T12:15:24.271158516Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:24.272180 containerd[1546]: time="2025-01-29T12:15:24.271638068Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 12:15:24.272675 containerd[1546]: time="2025-01-29T12:15:24.272647411Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:24.275217 containerd[1546]: time="2025-01-29T12:15:24.275121171Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.870905807s" Jan 29 12:15:24.275217 containerd[1546]: time="2025-01-29T12:15:24.275158410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 12:15:24.278210 containerd[1546]: time="2025-01-29T12:15:24.278116522Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:15:24.285294 containerd[1546]: time="2025-01-29T12:15:24.285148728Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:15:24.308216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381762024.mount: Deactivated successfully. 
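Note the repo tag "" in the pull record above: the Cilium image was requested as name:tag@digest, and once a digest is present the tag is informational only, so containerd stores the repo digest and drops the tag. A stdlib-only sketch of how such a reference splits (real parsers also handle registry ports and default tags):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	name, digest, _ := strings.Cut(ref, "@")
	// Cutting on the first ':' works here because the registry host carries
	// no port number.
	repo, tag, _ := strings.Cut(name, ":")

	fmt.Println("repo:  ", repo)   // quay.io/cilium/cilium
	fmt.Println("tag:   ", tag)    // v1.12.5, ignored once a digest is given
	fmt.Println("digest:", digest) // sha256:06ce2b0a...
}
```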
Jan 29 12:15:24.312518 containerd[1546]: time="2025-01-29T12:15:24.312474604Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\"" Jan 29 12:15:24.313110 containerd[1546]: time="2025-01-29T12:15:24.312948196Z" level=info msg="StartContainer for \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\"" Jan 29 12:15:24.363740 containerd[1546]: time="2025-01-29T12:15:24.363635732Z" level=info msg="StartContainer for \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\" returns successfully" Jan 29 12:15:24.596483 containerd[1546]: time="2025-01-29T12:15:24.591345428Z" level=info msg="shim disconnected" id=e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e namespace=k8s.io Jan 29 12:15:24.596483 containerd[1546]: time="2025-01-29T12:15:24.596410666Z" level=warning msg="cleaning up after shim disconnected" id=e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e namespace=k8s.io Jan 29 12:15:24.596483 containerd[1546]: time="2025-01-29T12:15:24.596423786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:24.766142 kubelet[2713]: E0129 12:15:24.766040 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:24.768574 containerd[1546]: time="2025-01-29T12:15:24.768305831Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:15:24.784372 containerd[1546]: time="2025-01-29T12:15:24.784311610Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\"" Jan 29 12:15:24.785301 containerd[1546]: time="2025-01-29T12:15:24.785191676Z" level=info msg="StartContainer for \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\"" Jan 29 12:15:24.826380 containerd[1546]: time="2025-01-29T12:15:24.826340567Z" level=info msg="StartContainer for \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\" returns successfully" Jan 29 12:15:24.851407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:15:24.851689 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:24.851755 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:24.858373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:24.871414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 12:15:24.873908 containerd[1546]: time="2025-01-29T12:15:24.873857034Z" level=info msg="shim disconnected" id=2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a namespace=k8s.io Jan 29 12:15:24.874069 containerd[1546]: time="2025-01-29T12:15:24.873908473Z" level=warning msg="cleaning up after shim disconnected" id=2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a namespace=k8s.io Jan 29 12:15:24.874069 containerd[1546]: time="2025-01-29T12:15:24.873918673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:25.306245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e-rootfs.mount: Deactivated successfully. Jan 29 12:15:25.395093 containerd[1546]: time="2025-01-29T12:15:25.394594200Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:25.395093 containerd[1546]: time="2025-01-29T12:15:25.395057553Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 12:15:25.395854 containerd[1546]: time="2025-01-29T12:15:25.395808901Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:25.397275 containerd[1546]: time="2025-01-29T12:15:25.397243639Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.119092317s" Jan 29 12:15:25.397330 containerd[1546]: time="2025-01-29T12:15:25.397275878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 12:15:25.400233 containerd[1546]: time="2025-01-29T12:15:25.400126034Z" level=info msg="CreateContainer within sandbox \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:15:25.407784 containerd[1546]: time="2025-01-29T12:15:25.407737276Z" level=info msg="CreateContainer within sandbox \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\"" Jan 29 12:15:25.409123 containerd[1546]: time="2025-01-29T12:15:25.408103030Z" level=info msg="StartContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\"" Jan 29 12:15:25.452341 containerd[1546]: time="2025-01-29T12:15:25.452294422Z" level=info msg="StartContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" returns successfully" Jan 29 12:15:25.770375 kubelet[2713]: E0129 12:15:25.769802 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:25.776591 kubelet[2713]: E0129 12:15:25.776493 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:25.784670 containerd[1546]: time="2025-01-29T12:15:25.784603450Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:15:25.800171 kubelet[2713]: I0129 12:15:25.800107 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fkz6b" podStartSLOduration=1.046707764 podStartE2EDuration="5.80000921s" podCreationTimestamp="2025-01-29 12:15:20 +0000 UTC" firstStartedPulling="2025-01-29 12:15:20.644693261 +0000 UTC m=+16.006452944" lastFinishedPulling="2025-01-29 12:15:25.397994707 +0000 UTC m=+20.759754390" observedRunningTime="2025-01-29 12:15:25.779557169 +0000 UTC m=+21.141316852" watchObservedRunningTime="2025-01-29 12:15:25.80000921 +0000 UTC m=+21.161768893" Jan 29 12:15:25.826899 containerd[1546]: time="2025-01-29T12:15:25.826835513Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\"" Jan 29 12:15:25.827419 containerd[1546]: time="2025-01-29T12:15:25.827382824Z" level=info msg="StartContainer for \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\"" Jan 29 12:15:25.901519 containerd[1546]: time="2025-01-29T12:15:25.901458631Z" level=info msg="StartContainer for \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\" returns successfully" Jan 29 12:15:26.033192 containerd[1546]: time="2025-01-29T12:15:26.030260085Z" level=info msg="shim disconnected" id=12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8 namespace=k8s.io Jan 29 12:15:26.033192 containerd[1546]: time="2025-01-29T12:15:26.030355204Z" level=warning msg="cleaning up after shim disconnected" id=12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8 namespace=k8s.io Jan 29 12:15:26.033192 containerd[1546]: time="2025-01-29T12:15:26.030372564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:26.780091 kubelet[2713]: E0129 12:15:26.780051 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:26.780461 kubelet[2713]: E0129 12:15:26.780119 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:26.791940 containerd[1546]: time="2025-01-29T12:15:26.791907090Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:15:26.804688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388009375.mount: Deactivated successfully. 
Jan 29 12:15:26.805579 containerd[1546]: time="2025-01-29T12:15:26.805419289Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\"" Jan 29 12:15:26.806151 containerd[1546]: time="2025-01-29T12:15:26.805829042Z" level=info msg="StartContainer for \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\"" Jan 29 12:15:26.848710 containerd[1546]: time="2025-01-29T12:15:26.848483886Z" level=info msg="StartContainer for \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\" returns successfully" Jan 29 12:15:26.866296 containerd[1546]: time="2025-01-29T12:15:26.866048185Z" level=info msg="shim disconnected" id=be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15 namespace=k8s.io Jan 29 12:15:26.866296 containerd[1546]: time="2025-01-29T12:15:26.866136143Z" level=warning msg="cleaning up after shim disconnected" id=be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15 namespace=k8s.io Jan 29 12:15:26.866296 containerd[1546]: time="2025-01-29T12:15:26.866150983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:27.305824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15-rootfs.mount: Deactivated successfully. Jan 29 12:15:27.784901 kubelet[2713]: E0129 12:15:27.784863 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:27.788606 containerd[1546]: time="2025-01-29T12:15:27.788090162Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:15:27.805022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020128817.mount: Deactivated successfully. 
Jan 29 12:15:27.805754 containerd[1546]: time="2025-01-29T12:15:27.805716870Z" level=info msg="CreateContainer within sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\"" Jan 29 12:15:27.806490 containerd[1546]: time="2025-01-29T12:15:27.806223062Z" level=info msg="StartContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\"" Jan 29 12:15:27.863665 containerd[1546]: time="2025-01-29T12:15:27.863615282Z" level=info msg="StartContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" returns successfully" Jan 29 12:15:28.053795 kubelet[2713]: I0129 12:15:28.053490 2713 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:15:28.083260 kubelet[2713]: I0129 12:15:28.081752 2713 topology_manager.go:215] "Topology Admit Handler" podUID="7fb83748-bc15-46b5-a6cb-b0926af19ed4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dfrmz" Jan 29 12:15:28.083260 kubelet[2713]: I0129 12:15:28.081929 2713 topology_manager.go:215] "Topology Admit Handler" podUID="766ac412-6b04-4689-847b-293d3ccd0da1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zhm6s" Jan 29 12:15:28.171557 kubelet[2713]: I0129 12:15:28.171405 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fb83748-bc15-46b5-a6cb-b0926af19ed4-config-volume\") pod \"coredns-7db6d8ff4d-dfrmz\" (UID: \"7fb83748-bc15-46b5-a6cb-b0926af19ed4\") " pod="kube-system/coredns-7db6d8ff4d-dfrmz" Jan 29 12:15:28.171557 kubelet[2713]: I0129 12:15:28.171448 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/766ac412-6b04-4689-847b-293d3ccd0da1-config-volume\") pod \"coredns-7db6d8ff4d-zhm6s\" (UID: \"766ac412-6b04-4689-847b-293d3ccd0da1\") " pod="kube-system/coredns-7db6d8ff4d-zhm6s" Jan 29 12:15:28.171557 kubelet[2713]: I0129 12:15:28.171470 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6cv\" (UniqueName: \"kubernetes.io/projected/766ac412-6b04-4689-847b-293d3ccd0da1-kube-api-access-md6cv\") pod \"coredns-7db6d8ff4d-zhm6s\" (UID: \"766ac412-6b04-4689-847b-293d3ccd0da1\") " pod="kube-system/coredns-7db6d8ff4d-zhm6s" Jan 29 12:15:28.171557 kubelet[2713]: I0129 12:15:28.171491 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fv7\" (UniqueName: \"kubernetes.io/projected/7fb83748-bc15-46b5-a6cb-b0926af19ed4-kube-api-access-b4fv7\") pod \"coredns-7db6d8ff4d-dfrmz\" (UID: \"7fb83748-bc15-46b5-a6cb-b0926af19ed4\") " pod="kube-system/coredns-7db6d8ff4d-dfrmz" Jan 29 12:15:28.388263 kubelet[2713]: E0129 12:15:28.387782 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:28.388263 kubelet[2713]: E0129 12:15:28.387826 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:28.391093 containerd[1546]: time="2025-01-29T12:15:28.390925289Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhm6s,Uid:766ac412-6b04-4689-847b-293d3ccd0da1,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:28.391093 containerd[1546]: time="2025-01-29T12:15:28.390974688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dfrmz,Uid:7fb83748-bc15-46b5-a6cb-b0926af19ed4,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:28.794520 kubelet[2713]: E0129 12:15:28.794461 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:29.796261 kubelet[2713]: E0129 12:15:29.795803 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:29.994174 systemd-networkd[1230]: cilium_host: Link UP Jan 29 12:15:29.994363 systemd-networkd[1230]: cilium_net: Link UP Jan 29 12:15:29.994505 systemd-networkd[1230]: cilium_net: Gained carrier Jan 29 12:15:29.994622 systemd-networkd[1230]: cilium_host: Gained carrier Jan 29 12:15:30.100446 systemd-networkd[1230]: cilium_vxlan: Link UP Jan 29 12:15:30.100452 systemd-networkd[1230]: cilium_vxlan: Gained carrier Jan 29 12:15:30.207276 systemd-networkd[1230]: cilium_net: Gained IPv6LL Jan 29 12:15:30.478192 kernel: NET: Registered PF_ALG protocol family Jan 29 12:15:30.775191 systemd-networkd[1230]: cilium_host: Gained IPv6LL Jan 29 12:15:30.797549 kubelet[2713]: E0129 12:15:30.797456 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:31.048938 systemd-networkd[1230]: lxc_health: Link UP Jan 29 12:15:31.056229 systemd-networkd[1230]: lxc_health: Gained carrier Jan 29 12:15:31.588387 systemd-networkd[1230]: lxcf7673f840785: Link UP Jan 29 12:15:31.595118 kernel: eth0: renamed from tmp24c13 Jan 29 12:15:31.604919 systemd-networkd[1230]: lxcf7673f840785: Gained carrier Jan 29 12:15:31.607691 systemd-networkd[1230]: lxcc5ff9c4097f6: Link UP Jan 29 12:15:31.616119 kernel: eth0: renamed from tmp038d6 Jan 29 12:15:31.623621 systemd-networkd[1230]: lxcc5ff9c4097f6: Gained carrier Jan 29 12:15:31.863259 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL Jan 29 12:15:32.024355 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:34122.service - OpenSSH per-connection server daemon (10.0.0.1:34122). Jan 29 12:15:32.059453 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 34122 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:32.060798 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:32.066593 systemd-logind[1526]: New session 8 of user core. Jan 29 12:15:32.073351 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:15:32.204290 sshd[3936]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:32.210343 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:15:32.210473 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:34122.service: Deactivated successfully. Jan 29 12:15:32.212901 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:15:32.214698 systemd-logind[1526]: Removed session 8. 
Jan 29 12:15:32.352454 kubelet[2713]: E0129 12:15:32.352406 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:32.371972 kubelet[2713]: I0129 12:15:32.371883 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w6qnk" podStartSLOduration=8.493801793 podStartE2EDuration="12.37186751s" podCreationTimestamp="2025-01-29 12:15:20 +0000 UTC" firstStartedPulling="2025-01-29 12:15:20.399847849 +0000 UTC m=+15.761607532" lastFinishedPulling="2025-01-29 12:15:24.277913606 +0000 UTC m=+19.639673249" observedRunningTime="2025-01-29 12:15:28.808278243 +0000 UTC m=+24.170037966" watchObservedRunningTime="2025-01-29 12:15:32.37186751 +0000 UTC m=+27.733627193" Jan 29 12:15:33.079250 systemd-networkd[1230]: lxc_health: Gained IPv6LL Jan 29 12:15:33.271324 systemd-networkd[1230]: lxcc5ff9c4097f6: Gained IPv6LL Jan 29 12:15:33.463333 systemd-networkd[1230]: lxcf7673f840785: Gained IPv6LL Jan 29 12:15:35.110539 containerd[1546]: time="2025-01-29T12:15:35.110440836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:35.110539 containerd[1546]: time="2025-01-29T12:15:35.110501795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:35.110539 containerd[1546]: time="2025-01-29T12:15:35.110525715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:35.111624 containerd[1546]: time="2025-01-29T12:15:35.111500505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:35.115139 containerd[1546]: time="2025-01-29T12:15:35.113059088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:35.115139 containerd[1546]: time="2025-01-29T12:15:35.113132447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:35.115139 containerd[1546]: time="2025-01-29T12:15:35.113146007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:35.115139 containerd[1546]: time="2025-01-29T12:15:35.113228046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:35.133168 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:15:35.136578 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:15:35.155955 containerd[1546]: time="2025-01-29T12:15:35.155903595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dfrmz,Uid:7fb83748-bc15-46b5-a6cb-b0926af19ed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c13f504da4a825db767a46dfb3bd0fc4dfd1437dd6e4c47df3267858b5c264\"" Jan 29 12:15:35.157224 kubelet[2713]: E0129 12:15:35.157198 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:35.159000 containerd[1546]: time="2025-01-29T12:15:35.158244690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zhm6s,Uid:766ac412-6b04-4689-847b-293d3ccd0da1,Namespace:kube-system,Attempt:0,} returns sandbox id \"038d660763bd5c307e7fa1016bb2b0a71ff139c398e29adbb4a8182ce5174983\"" Jan 29 12:15:35.159629 kubelet[2713]: E0129 12:15:35.159596 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:35.161626 containerd[1546]: time="2025-01-29T12:15:35.161587015Z" level=info msg="CreateContainer within sandbox \"24c13f504da4a825db767a46dfb3bd0fc4dfd1437dd6e4c47df3267858b5c264\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:15:35.162049 containerd[1546]: time="2025-01-29T12:15:35.162010690Z" level=info msg="CreateContainer within sandbox \"038d660763bd5c307e7fa1016bb2b0a71ff139c398e29adbb4a8182ce5174983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:15:35.177764 containerd[1546]: time="2025-01-29T12:15:35.177691004Z" level=info msg="CreateContainer within sandbox \"24c13f504da4a825db767a46dfb3bd0fc4dfd1437dd6e4c47df3267858b5c264\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45634b0fbc0f7c59fb7185c750918fa71feadf3211448b40d9313f38bccbbf24\"" Jan 29 12:15:35.178420 containerd[1546]: time="2025-01-29T12:15:35.178271558Z" level=info msg="StartContainer for \"45634b0fbc0f7c59fb7185c750918fa71feadf3211448b40d9313f38bccbbf24\"" Jan 29 12:15:35.179530 containerd[1546]: time="2025-01-29T12:15:35.178957191Z" level=info msg="CreateContainer within sandbox \"038d660763bd5c307e7fa1016bb2b0a71ff139c398e29adbb4a8182ce5174983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f3680026eb5dc2b7f64c5094afe4fe822df1b0da9491074601fbe5b24733f66\"" Jan 29 12:15:35.179910 containerd[1546]: time="2025-01-29T12:15:35.179813422Z" level=info msg="StartContainer for \"5f3680026eb5dc2b7f64c5094afe4fe822df1b0da9491074601fbe5b24733f66\"" Jan 29 12:15:35.236222 containerd[1546]: time="2025-01-29T12:15:35.234999718Z" level=info msg="StartContainer for \"5f3680026eb5dc2b7f64c5094afe4fe822df1b0da9491074601fbe5b24733f66\" returns successfully" Jan 29 12:15:35.236222 containerd[1546]: time="2025-01-29T12:15:35.235070117Z" level=info msg="StartContainer for \"45634b0fbc0f7c59fb7185c750918fa71feadf3211448b40d9313f38bccbbf24\" returns successfully" Jan 29 12:15:35.816256 kubelet[2713]: E0129 12:15:35.816145 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:35.819339 kubelet[2713]: E0129 12:15:35.817930 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:35.826856 kubelet[2713]: I0129 12:15:35.826656 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zhm6s" podStartSLOduration=15.826639819 podStartE2EDuration="15.826639819s" podCreationTimestamp="2025-01-29 12:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:35.826489701 +0000 UTC m=+31.188249384" watchObservedRunningTime="2025-01-29 12:15:35.826639819 +0000 UTC m=+31.188399502" Jan 29 12:15:35.848991 kubelet[2713]: I0129 12:15:35.848462 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dfrmz" podStartSLOduration=15.848440828 podStartE2EDuration="15.848440828s" podCreationTimestamp="2025-01-29 12:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:15:35.846854805 +0000 UTC m=+31.208614488" watchObservedRunningTime="2025-01-29 12:15:35.848440828 +0000 UTC m=+31.210200511" Jan 29 12:15:36.822106 kubelet[2713]: E0129 12:15:36.819834 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:36.822106 kubelet[2713]: E0129 12:15:36.819933 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:37.213316 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:53936.service - OpenSSH per-connection server daemon (10.0.0.1:53936). Jan 29 12:15:37.249398 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 53936 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:37.250650 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:37.254892 systemd-logind[1526]: New session 9 of user core. Jan 29 12:15:37.266378 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:15:37.386107 sshd[4128]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:37.389388 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:53936.service: Deactivated successfully. Jan 29 12:15:37.391379 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:15:37.391516 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:15:37.392691 systemd-logind[1526]: Removed session 9. 
Jan 29 12:15:37.821693 kubelet[2713]: E0129 12:15:37.821348 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:37.822057 kubelet[2713]: E0129 12:15:37.822005 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:42.404327 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:53952.service - OpenSSH per-connection server daemon (10.0.0.1:53952). Jan 29 12:15:42.436974 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 53952 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:42.438163 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:42.442118 systemd-logind[1526]: New session 10 of user core. Jan 29 12:15:42.452462 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:15:42.562940 sshd[4144]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:42.569547 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:15:42.569840 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:53952.service: Deactivated successfully. Jan 29 12:15:42.571525 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:15:42.571976 systemd-logind[1526]: Removed session 10. Jan 29 12:15:43.751210 kubelet[2713]: I0129 12:15:43.751161 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:15:43.752402 kubelet[2713]: E0129 12:15:43.752227 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:43.838334 kubelet[2713]: E0129 12:15:43.838304 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:47.578293 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:57430.service - OpenSSH per-connection server daemon (10.0.0.1:57430). Jan 29 12:15:47.613563 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 57430 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:47.614707 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:47.618534 systemd-logind[1526]: New session 11 of user core. Jan 29 12:15:47.626276 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:15:47.734463 sshd[4160]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:47.743311 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442). Jan 29 12:15:47.743672 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:57430.service: Deactivated successfully. Jan 29 12:15:47.746461 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:15:47.746630 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:15:47.747823 systemd-logind[1526]: Removed session 11. 
Jan 29 12:15:47.775355 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:47.776647 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:47.780474 systemd-logind[1526]: New session 12 of user core. Jan 29 12:15:47.787310 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:15:47.928862 sshd[4173]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:47.936451 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:57456.service - OpenSSH per-connection server daemon (10.0.0.1:57456). Jan 29 12:15:47.936814 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:57442.service: Deactivated successfully. Jan 29 12:15:47.940752 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:15:47.943928 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:15:47.946757 systemd-logind[1526]: Removed session 12. Jan 29 12:15:47.980263 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 57456 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:47.981692 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:47.985666 systemd-logind[1526]: New session 13 of user core. Jan 29 12:15:47.999462 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:15:48.104772 sshd[4186]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:48.107798 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:57456.service: Deactivated successfully. Jan 29 12:15:48.109636 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:15:48.109703 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:15:48.110834 systemd-logind[1526]: Removed session 13. Jan 29 12:15:53.113306 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:45030.service - OpenSSH per-connection server daemon (10.0.0.1:45030). Jan 29 12:15:53.144923 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 45030 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:53.146134 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:53.150046 systemd-logind[1526]: New session 14 of user core. Jan 29 12:15:53.159301 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:15:53.264173 sshd[4207]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:53.266760 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:45030.service: Deactivated successfully. Jan 29 12:15:53.269297 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:15:53.269938 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:15:53.270920 systemd-logind[1526]: Removed session 14. Jan 29 12:15:58.280311 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:45032.service - OpenSSH per-connection server daemon (10.0.0.1:45032). Jan 29 12:15:58.311866 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 45032 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:58.313115 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:58.317239 systemd-logind[1526]: New session 15 of user core. Jan 29 12:15:58.327318 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 12:15:58.432839 sshd[4223]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:58.439290 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:45036.service - OpenSSH per-connection server daemon (10.0.0.1:45036). Jan 29 12:15:58.439663 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:45032.service: Deactivated successfully. Jan 29 12:15:58.442181 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:15:58.443108 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:15:58.443956 systemd-logind[1526]: Removed session 15. Jan 29 12:15:58.471624 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 45036 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:58.472785 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:58.476952 systemd-logind[1526]: New session 16 of user core. Jan 29 12:15:58.482316 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:15:58.678538 sshd[4235]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:58.691303 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:45052.service - OpenSSH per-connection server daemon (10.0.0.1:45052). Jan 29 12:15:58.692472 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:45036.service: Deactivated successfully. Jan 29 12:15:58.694372 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:15:58.695987 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:15:58.696826 systemd-logind[1526]: Removed session 16. Jan 29 12:15:58.725846 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 45052 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:58.727053 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:58.730920 systemd-logind[1526]: New session 17 of user core. Jan 29 12:15:58.741319 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:15:59.954699 sshd[4249]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:59.966378 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:45054.service - OpenSSH per-connection server daemon (10.0.0.1:45054). Jan 29 12:15:59.966772 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:45052.service: Deactivated successfully. Jan 29 12:15:59.971125 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:15:59.973147 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:15:59.976743 systemd-logind[1526]: Removed session 17. Jan 29 12:16:00.003976 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 45054 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:00.005457 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:00.009889 systemd-logind[1526]: New session 18 of user core. Jan 29 12:16:00.021349 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:16:00.223401 sshd[4268]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:00.233351 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:45068.service - OpenSSH per-connection server daemon (10.0.0.1:45068). Jan 29 12:16:00.234408 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:45054.service: Deactivated successfully. Jan 29 12:16:00.239144 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:16:00.240451 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 29 12:16:00.244514 systemd-logind[1526]: Removed session 18. Jan 29 12:16:00.266644 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 45068 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:00.267905 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:00.272277 systemd-logind[1526]: New session 19 of user core. Jan 29 12:16:00.281381 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:16:00.391472 sshd[4283]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:00.394115 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:45068.service: Deactivated successfully. Jan 29 12:16:00.397298 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:16:00.397442 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:16:00.398327 systemd-logind[1526]: Removed session 19. Jan 29 12:16:05.410360 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:38508.service - OpenSSH per-connection server daemon (10.0.0.1:38508). Jan 29 12:16:05.442047 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:05.443441 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:05.447144 systemd-logind[1526]: New session 20 of user core. Jan 29 12:16:05.457320 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:16:05.570147 sshd[4306]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:05.573521 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:38508.service: Deactivated successfully. Jan 29 12:16:05.575679 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:16:05.576183 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:16:05.578803 systemd-logind[1526]: Removed session 20. Jan 29 12:16:10.580289 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:38516.service - OpenSSH per-connection server daemon (10.0.0.1:38516). Jan 29 12:16:10.611542 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 38516 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:10.612745 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:10.616923 systemd-logind[1526]: New session 21 of user core. Jan 29 12:16:10.624052 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:16:10.730612 sshd[4321]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:10.734497 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:38516.service: Deactivated successfully. Jan 29 12:16:10.737452 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:16:10.739716 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:16:10.740641 systemd-logind[1526]: Removed session 21. Jan 29 12:16:15.742332 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:57080.service - OpenSSH per-connection server daemon (10.0.0.1:57080). Jan 29 12:16:15.773582 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 57080 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:15.774819 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:15.778865 systemd-logind[1526]: New session 22 of user core. Jan 29 12:16:15.789342 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 12:16:15.896892 sshd[4337]: pam_unix(sshd:session): session closed for user core Jan 29 12:16:15.905381 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:57086.service - OpenSSH per-connection server daemon (10.0.0.1:57086). Jan 29 12:16:15.905854 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:57080.service: Deactivated successfully. Jan 29 12:16:15.907569 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:16:15.908998 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:16:15.910005 systemd-logind[1526]: Removed session 22. Jan 29 12:16:15.937738 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 57086 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:16:15.939019 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:16:15.942889 systemd-logind[1526]: New session 23 of user core. Jan 29 12:16:15.952381 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:16:17.592103 containerd[1546]: time="2025-01-29T12:16:17.592042000Z" level=info msg="StopContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" with timeout 30 (s)" Jan 29 12:16:17.592485 containerd[1546]: time="2025-01-29T12:16:17.592397048Z" level=info msg="Stop container \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" with signal terminated" Jan 29 12:16:17.621699 containerd[1546]: time="2025-01-29T12:16:17.621625623Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:16:17.622303 containerd[1546]: time="2025-01-29T12:16:17.622271717Z" level=info msg="StopContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" with timeout 2 (s)" Jan 29 12:16:17.622595 containerd[1546]: time="2025-01-29T12:16:17.622530083Z" level=info msg="Stop container \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" with signal terminated" Jan 29 12:16:17.622585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005-rootfs.mount: Deactivated successfully. Jan 29 12:16:17.628695 systemd-networkd[1230]: lxc_health: Link DOWN Jan 29 12:16:17.629369 containerd[1546]: time="2025-01-29T12:16:17.628718102Z" level=info msg="shim disconnected" id=79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005 namespace=k8s.io Jan 29 12:16:17.629369 containerd[1546]: time="2025-01-29T12:16:17.628766023Z" level=warning msg="cleaning up after shim disconnected" id=79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005 namespace=k8s.io Jan 29 12:16:17.629369 containerd[1546]: time="2025-01-29T12:16:17.628776743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.628702 systemd-networkd[1230]: lxc_health: Lost carrier Jan 29 12:16:17.665213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a-rootfs.mount: Deactivated successfully. 
Jan 29 12:16:17.673140 containerd[1546]: time="2025-01-29T12:16:17.673063895Z" level=info msg="shim disconnected" id=61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a namespace=k8s.io Jan 29 12:16:17.673140 containerd[1546]: time="2025-01-29T12:16:17.673134577Z" level=warning msg="cleaning up after shim disconnected" id=61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a namespace=k8s.io Jan 29 12:16:17.673140 containerd[1546]: time="2025-01-29T12:16:17.673143057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.674353 containerd[1546]: time="2025-01-29T12:16:17.674321044Z" level=info msg="StopContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" returns successfully" Jan 29 12:16:17.675003 containerd[1546]: time="2025-01-29T12:16:17.674964298Z" level=info msg="StopPodSandbox for \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\"" Jan 29 12:16:17.675041 containerd[1546]: time="2025-01-29T12:16:17.675006299Z" level=info msg="Container to stop \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.676857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e-shm.mount: Deactivated successfully. Jan 29 12:16:17.699725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e-rootfs.mount: Deactivated successfully. Jan 29 12:16:17.705545 containerd[1546]: time="2025-01-29T12:16:17.705413180Z" level=info msg="StopContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" returns successfully" Jan 29 12:16:17.706303 containerd[1546]: time="2025-01-29T12:16:17.706273760Z" level=info msg="StopPodSandbox for \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\"" Jan 29 12:16:17.706411 containerd[1546]: time="2025-01-29T12:16:17.706319761Z" level=info msg="Container to stop \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.706411 containerd[1546]: time="2025-01-29T12:16:17.706332481Z" level=info msg="Container to stop \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.706411 containerd[1546]: time="2025-01-29T12:16:17.706341401Z" level=info msg="Container to stop \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.706411 containerd[1546]: time="2025-01-29T12:16:17.706350321Z" level=info msg="Container to stop \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.706411 containerd[1546]: time="2025-01-29T12:16:17.706359122Z" level=info msg="Container to stop \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.710031 containerd[1546]: time="2025-01-29T12:16:17.709985363Z" level=info msg="shim disconnected" id=b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e namespace=k8s.io Jan 29 12:16:17.710031 containerd[1546]: time="2025-01-29T12:16:17.710026884Z" level=warning msg="cleaning up after 
shim disconnected" id=b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e namespace=k8s.io Jan 29 12:16:17.710031 containerd[1546]: time="2025-01-29T12:16:17.710035284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.726931 containerd[1546]: time="2025-01-29T12:16:17.726884942Z" level=info msg="TearDown network for sandbox \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\" successfully" Jan 29 12:16:17.726931 containerd[1546]: time="2025-01-29T12:16:17.726921022Z" level=info msg="StopPodSandbox for \"b270fe77b010e821384cc420f72822a958da547d3c7d3c487a90ec95fea93b7e\" returns successfully" Jan 29 12:16:17.744054 containerd[1546]: time="2025-01-29T12:16:17.743966165Z" level=info msg="shim disconnected" id=0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321 namespace=k8s.io Jan 29 12:16:17.744054 containerd[1546]: time="2025-01-29T12:16:17.744034806Z" level=warning msg="cleaning up after shim disconnected" id=0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321 namespace=k8s.io Jan 29 12:16:17.744412 containerd[1546]: time="2025-01-29T12:16:17.744263531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.754570 containerd[1546]: time="2025-01-29T12:16:17.754529161Z" level=info msg="TearDown network for sandbox \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" successfully" Jan 29 12:16:17.754570 containerd[1546]: time="2025-01-29T12:16:17.754565122Z" level=info msg="StopPodSandbox for \"0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321\" returns successfully" Jan 29 12:16:17.756274 kubelet[2713]: I0129 12:16:17.756240 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270598e6-610f-4d04-ad7b-509d3e932f40-cilium-config-path\") pod \"270598e6-610f-4d04-ad7b-509d3e932f40\" (UID: \"270598e6-610f-4d04-ad7b-509d3e932f40\") " Jan 29 12:16:17.756274 kubelet[2713]: I0129 12:16:17.756278 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtgx\" (UniqueName: \"kubernetes.io/projected/270598e6-610f-4d04-ad7b-509d3e932f40-kube-api-access-xvtgx\") pod \"270598e6-610f-4d04-ad7b-509d3e932f40\" (UID: \"270598e6-610f-4d04-ad7b-509d3e932f40\") " Jan 29 12:16:17.768638 kubelet[2713]: I0129 12:16:17.768600 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270598e6-610f-4d04-ad7b-509d3e932f40-kube-api-access-xvtgx" (OuterVolumeSpecName: "kube-api-access-xvtgx") pod "270598e6-610f-4d04-ad7b-509d3e932f40" (UID: "270598e6-610f-4d04-ad7b-509d3e932f40"). InnerVolumeSpecName "kube-api-access-xvtgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:17.768638 kubelet[2713]: I0129 12:16:17.768620 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270598e6-610f-4d04-ad7b-509d3e932f40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "270598e6-610f-4d04-ad7b-509d3e932f40" (UID: "270598e6-610f-4d04-ad7b-509d3e932f40"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856740 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-net\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856784 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-hostproc\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856801 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-bpf-maps\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856818 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-cgroup\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856837 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.856918 kubelet[2713]: I0129 12:16:17.856860 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsc2s\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-kube-api-access-gsc2s\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856914 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-config-path\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856933 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/206a458d-f5da-4890-8d3a-8a905e1c67a2-clustermesh-secrets\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856951 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-kernel\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856968 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-xtables-lock\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856982 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-etc-cni-netd\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857163 kubelet[2713]: I0129 12:16:17.856997 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-lib-modules\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857012 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cni-path\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857025 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-run\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: \"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857043 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-hubble-tls\") pod \"206a458d-f5da-4890-8d3a-8a905e1c67a2\" (UID: 
\"206a458d-f5da-4890-8d3a-8a905e1c67a2\") " Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857098 2713 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xvtgx\" (UniqueName: \"kubernetes.io/projected/270598e6-610f-4d04-ad7b-509d3e932f40-kube-api-access-xvtgx\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857111 2713 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.857292 kubelet[2713]: I0129 12:16:17.857121 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/270598e6-610f-4d04-ad7b-509d3e932f40-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.857460 kubelet[2713]: I0129 12:16:17.857373 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.857460 kubelet[2713]: I0129 12:16:17.857393 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.857460 kubelet[2713]: I0129 12:16:17.857428 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.857460 kubelet[2713]: I0129 12:16:17.857443 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cni-path" (OuterVolumeSpecName: "cni-path") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859724 kubelet[2713]: I0129 12:16:17.859290 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859724 kubelet[2713]: I0129 12:16:17.859329 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859724 kubelet[2713]: I0129 12:16:17.859347 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859724 kubelet[2713]: I0129 12:16:17.859363 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-hostproc" (OuterVolumeSpecName: "hostproc") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859921 kubelet[2713]: I0129 12:16:17.859344 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.859975 kubelet[2713]: I0129 12:16:17.859939 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206a458d-f5da-4890-8d3a-8a905e1c67a2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:16:17.860180 kubelet[2713]: I0129 12:16:17.860117 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:17.860245 kubelet[2713]: I0129 12:16:17.860185 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-kube-api-access-gsc2s" (OuterVolumeSpecName: "kube-api-access-gsc2s") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "kube-api-access-gsc2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:17.861572 kubelet[2713]: I0129 12:16:17.861537 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "206a458d-f5da-4890-8d3a-8a905e1c67a2" (UID: "206a458d-f5da-4890-8d3a-8a905e1c67a2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:17.898254 kubelet[2713]: I0129 12:16:17.898219 2713 scope.go:117] "RemoveContainer" containerID="79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005" Jan 29 12:16:17.899286 containerd[1546]: time="2025-01-29T12:16:17.899247165Z" level=info msg="RemoveContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\"" Jan 29 12:16:17.912724 containerd[1546]: time="2025-01-29T12:16:17.912682146Z" level=info msg="RemoveContainer for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" returns successfully" Jan 29 12:16:17.912995 kubelet[2713]: I0129 12:16:17.912959 2713 scope.go:117] "RemoveContainer" containerID="79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005" Jan 29 12:16:17.913261 containerd[1546]: time="2025-01-29T12:16:17.913212598Z" level=error msg="ContainerStatus for \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\": not found" Jan 29 12:16:17.914966 kubelet[2713]: E0129 12:16:17.914912 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\": not found" containerID="79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005" Jan 29 12:16:17.915048 kubelet[2713]: I0129 12:16:17.914959 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005"} err="failed to get container status \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\": rpc error: code = NotFound desc = an error occurred when try to find container \"79ec184015e82c2d7a78ff5916acfa7769c584e4b175cfe6991ae71401cfb005\": not found" Jan 29 12:16:17.915048 kubelet[2713]: I0129 12:16:17.915034 2713 scope.go:117] "RemoveContainer" containerID="61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a" Jan 29 12:16:17.916208 containerd[1546]: time="2025-01-29T12:16:17.916171424Z" level=info msg="RemoveContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\"" Jan 29 12:16:17.921629 containerd[1546]: time="2025-01-29T12:16:17.921593466Z" level=info msg="RemoveContainer for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" returns successfully" Jan 29 12:16:17.921884 kubelet[2713]: I0129 12:16:17.921802 2713 scope.go:117] "RemoveContainer" containerID="be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15" Jan 29 12:16:17.924263 containerd[1546]: time="2025-01-29T12:16:17.924019760Z" level=info msg="RemoveContainer for \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\"" Jan 29 12:16:17.932168 containerd[1546]: time="2025-01-29T12:16:17.932127902Z" level=info msg="RemoveContainer for \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\" returns successfully" Jan 29 12:16:17.932335 kubelet[2713]: I0129 12:16:17.932302 2713 scope.go:117] "RemoveContainer" containerID="12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8" Jan 29 12:16:17.933350 containerd[1546]: time="2025-01-29T12:16:17.933326968Z" level=info msg="RemoveContainer for \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\"" Jan 29 12:16:17.935657 
containerd[1546]: time="2025-01-29T12:16:17.935622540Z" level=info msg="RemoveContainer for \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\" returns successfully" Jan 29 12:16:17.935821 kubelet[2713]: I0129 12:16:17.935790 2713 scope.go:117] "RemoveContainer" containerID="2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a" Jan 29 12:16:17.936775 containerd[1546]: time="2025-01-29T12:16:17.936749125Z" level=info msg="RemoveContainer for \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\"" Jan 29 12:16:17.938893 containerd[1546]: time="2025-01-29T12:16:17.938867413Z" level=info msg="RemoveContainer for \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\" returns successfully" Jan 29 12:16:17.939058 kubelet[2713]: I0129 12:16:17.939025 2713 scope.go:117] "RemoveContainer" containerID="e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e" Jan 29 12:16:17.940012 containerd[1546]: time="2025-01-29T12:16:17.939990758Z" level=info msg="RemoveContainer for \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\"" Jan 29 12:16:17.942350 containerd[1546]: time="2025-01-29T12:16:17.942290929Z" level=info msg="RemoveContainer for \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\" returns successfully" Jan 29 12:16:17.942749 kubelet[2713]: I0129 12:16:17.942508 2713 scope.go:117] "RemoveContainer" containerID="61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a" Jan 29 12:16:17.942811 containerd[1546]: time="2025-01-29T12:16:17.942687138Z" level=error msg="ContainerStatus for \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\": not found" Jan 29 12:16:17.943101 kubelet[2713]: E0129 12:16:17.942921 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\": not found" containerID="61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a" Jan 29 12:16:17.943101 kubelet[2713]: I0129 12:16:17.943018 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a"} err="failed to get container status \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"61a6ce155eb5e50ec239071c922bed562c9a9ef9db0eacd96c635c8fac4b2c8a\": not found" Jan 29 12:16:17.943101 kubelet[2713]: I0129 12:16:17.943039 2713 scope.go:117] "RemoveContainer" containerID="be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15" Jan 29 12:16:17.943520 containerd[1546]: time="2025-01-29T12:16:17.943457716Z" level=error msg="ContainerStatus for \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\": not found" Jan 29 12:16:17.943600 kubelet[2713]: E0129 12:16:17.943566 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\": 
not found" containerID="be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15" Jan 29 12:16:17.943600 kubelet[2713]: I0129 12:16:17.943590 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15"} err="failed to get container status \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\": rpc error: code = NotFound desc = an error occurred when try to find container \"be4e11343612971218f06a27e1fb1892770ee530c2a45362b9670e314fdf0d15\": not found" Jan 29 12:16:17.943656 kubelet[2713]: I0129 12:16:17.943606 2713 scope.go:117] "RemoveContainer" containerID="12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8" Jan 29 12:16:17.943860 containerd[1546]: time="2025-01-29T12:16:17.943758642Z" level=error msg="ContainerStatus for \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\": not found" Jan 29 12:16:17.944070 kubelet[2713]: E0129 12:16:17.943945 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\": not found" containerID="12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8" Jan 29 12:16:17.944070 kubelet[2713]: I0129 12:16:17.943995 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8"} err="failed to get container status \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"12ab472ee025875c6c47e2262e0028477d2f7cee3adb12b6d2a2fa44e5e804a8\": not found" Jan 29 12:16:17.944070 kubelet[2713]: I0129 12:16:17.944011 2713 scope.go:117] "RemoveContainer" containerID="2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a" Jan 29 12:16:17.944517 containerd[1546]: time="2025-01-29T12:16:17.944289734Z" level=error msg="ContainerStatus for \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\": not found" Jan 29 12:16:17.944569 kubelet[2713]: E0129 12:16:17.944414 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\": not found" containerID="2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a" Jan 29 12:16:17.944569 kubelet[2713]: I0129 12:16:17.944434 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a"} err="failed to get container status \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b021558e6e50f300656960853fdf469629de9fbcd517adaabc591de3864472a\": not found" Jan 29 12:16:17.944569 kubelet[2713]: I0129 12:16:17.944461 2713 scope.go:117] "RemoveContainer" 
containerID="e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e" Jan 29 12:16:17.944945 containerd[1546]: time="2025-01-29T12:16:17.944826346Z" level=error msg="ContainerStatus for \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\": not found" Jan 29 12:16:17.944994 kubelet[2713]: E0129 12:16:17.944942 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\": not found" containerID="e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e" Jan 29 12:16:17.944994 kubelet[2713]: I0129 12:16:17.944962 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e"} err="failed to get container status \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e225c3f4d18eb89aa2894fe5692327f0757fd8ad3e4c9bf440b24d0b9220ce6e\": not found" Jan 29 12:16:17.957333 kubelet[2713]: I0129 12:16:17.957274 2713 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957338 2713 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957356 2713 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957379 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957388 2713 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gsc2s\" (UniqueName: \"kubernetes.io/projected/206a458d-f5da-4890-8d3a-8a905e1c67a2-kube-api-access-gsc2s\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957397 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957406 kubelet[2713]: I0129 12:16:17.957404 2713 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/206a458d-f5da-4890-8d3a-8a905e1c67a2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957412 2713 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957420 
Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957427 2713 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957434 2713 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957442 2713 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 29 12:16:17.957536 kubelet[2713]: I0129 12:16:17.957456 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/206a458d-f5da-4890-8d3a-8a905e1c67a2-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 29 12:16:18.601364 systemd[1]: var-lib-kubelet-pods-270598e6\x2d610f\x2d4d04\x2dad7b\x2d509d3e932f40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxvtgx.mount: Deactivated successfully.
Jan 29 12:16:18.601518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321-rootfs.mount: Deactivated successfully.
Jan 29 12:16:18.601604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a50f1c4ed20ea0a14eb52e05c69c842da13f16dd74051860ffaee092b60f321-shm.mount: Deactivated successfully.
Jan 29 12:16:18.601688 systemd[1]: var-lib-kubelet-pods-206a458d\x2df5da\x2d4890\x2d8d3a\x2d8a905e1c67a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsc2s.mount: Deactivated successfully.
Jan 29 12:16:18.601780 systemd[1]: var-lib-kubelet-pods-206a458d\x2df5da\x2d4890\x2d8d3a\x2d8a905e1c67a2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 12:16:18.601855 systemd[1]: var-lib-kubelet-pods-206a458d\x2df5da\x2d4890\x2d8d3a\x2d8a905e1c67a2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 12:16:18.726047 kubelet[2713]: I0129 12:16:18.726012 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" path="/var/lib/kubelet/pods/206a458d-f5da-4890-8d3a-8a905e1c67a2/volumes"
Jan 29 12:16:18.728108 kubelet[2713]: I0129 12:16:18.727607 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270598e6-610f-4d04-ad7b-509d3e932f40" path="/var/lib/kubelet/pods/270598e6-610f-4d04-ad7b-509d3e932f40/volumes"
Jan 29 12:16:19.524996 sshd[4350]: pam_unix(sshd:session): session closed for user core
Jan 29 12:16:19.535312 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:57098.service - OpenSSH per-connection server daemon (10.0.0.1:57098).
Jan 29 12:16:19.536200 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:57086.service: Deactivated successfully.
Jan 29 12:16:19.538092 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:16:19.540002 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:16:19.540969 systemd-logind[1526]: Removed session 23.
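
The systemd mount-unit names above ("var-lib-kubelet-pods-...\x2d...") come from systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and bytes outside the safe set (including "-" and "~") are hex-escaped as \xXX. A sketch of just the rules visible in these names, assuming no other special bytes; systemd-escape(1) covers more cases than this.

    package main

    import "fmt"

    // escapeUnitPath applies the escaping rules visible in the mount-unit
    // names above; it is not a full reimplementation of systemd-escape(1).
    func escapeUnitPath(p string) string {
        out := ""
        for i := 0; i < len(p); i++ {
            switch c := p[i]; {
            case c == '/':
                if i > 0 {
                    out += "-" // path separators become dashes
                }
            case c == '-' || c == '~':
                out += fmt.Sprintf(`\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
            default:
                out += string(c)
            }
        }
        return out
    }

    func main() {
        p := "/var/lib/kubelet/pods/206a458d-f5da-4890-8d3a-8a905e1c67a2/volumes"
        fmt.Println(escapeUnitPath(p) + ".mount")
        // prints: var-lib-kubelet-pods-206a458d\x2df5da\x2d4890\x2d8d3a\x2d8a905e1c67a2-volumes.mount
    }
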
Jan 29 12:16:19.569229 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 57098 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:16:19.570569 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:16:19.575110 systemd-logind[1526]: New session 24 of user core.
Jan 29 12:16:19.585390 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:16:19.772497 kubelet[2713]: E0129 12:16:19.772449 2713 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:16:20.436067 sshd[4520]: pam_unix(sshd:session): session closed for user core
Jan 29 12:16:20.446970 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:57114.service - OpenSSH per-connection server daemon (10.0.0.1:57114).
Jan 29 12:16:20.450172 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:57098.service: Deactivated successfully.
Jan 29 12:16:20.455040 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:16:20.462056 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:16:20.466854 systemd-logind[1526]: Removed session 24.
Jan 29 12:16:20.467622 kubelet[2713]: I0129 12:16:20.467537 2713 topology_manager.go:215] "Topology Admit Handler" podUID="5a674fbd-323b-46a8-b99d-c32947b96767" podNamespace="kube-system" podName="cilium-lh4bz"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467731 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="apply-sysctl-overwrites"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467747 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="mount-bpf-fs"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467754 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="270598e6-610f-4d04-ad7b-509d3e932f40" containerName="cilium-operator"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467760 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="clean-cilium-state"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467766 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="cilium-agent"
Jan 29 12:16:20.467999 kubelet[2713]: E0129 12:16:20.467771 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="mount-cgroup"
Jan 29 12:16:20.467999 kubelet[2713]: I0129 12:16:20.467791 2713 memory_manager.go:354] "RemoveStaleState removing state" podUID="206a458d-f5da-4890-8d3a-8a905e1c67a2" containerName="cilium-agent"
Jan 29 12:16:20.467999 kubelet[2713]: I0129 12:16:20.467797 2713 memory_manager.go:354] "RemoveStaleState removing state" podUID="270598e6-610f-4d04-ad7b-509d3e932f40" containerName="cilium-operator"
Jan 29 12:16:20.502433 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 57114 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:16:20.503658 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:16:20.507615 systemd-logind[1526]: New session 25 of user core.
Jan 29 12:16:20.517395 systemd[1]: Started session-25.scope - Session 25 of User core.
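
The run above interleaves sshd session turnover (sessions 23 through 25 opening and closing) with kubelet admitting the replacement cilium-lh4bz pod; the cpu_manager and memory_manager "RemoveStaleState" entries clear per-container resource bookkeeping left behind by the two deleted pods. For following the SSH churn specifically, a sketch that extracts logins and session removals from lines like those above; the regexes are illustrative, not a standard tool.

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    var (
        accepted = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
        removed  = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        log := `Jan 29 12:16:19.569229 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 57098 ssh2: RSA SHA256:...
    Jan 29 12:16:20.466854 systemd-logind[1526]: Removed session 24.`
        sc := bufio.NewScanner(strings.NewReader(log))
        for sc.Scan() {
            if m := accepted.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("login user=%s from=%s port=%s\n", m[1], m[2], m[3])
            }
            if m := removed.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("session %s closed\n", m[1])
            }
        }
    }
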
Jan 29 12:16:20.567043 sshd[4534]: pam_unix(sshd:session): session closed for user core
Jan 29 12:16:20.569829 kubelet[2713]: I0129 12:16:20.569479 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-host-proc-sys-kernel\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.569829 kubelet[2713]: I0129 12:16:20.569523 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a674fbd-323b-46a8-b99d-c32947b96767-clustermesh-secrets\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.569829 kubelet[2713]: I0129 12:16:20.569541 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-cni-path\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.569829 kubelet[2713]: I0129 12:16:20.569559 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a674fbd-323b-46a8-b99d-c32947b96767-cilium-ipsec-secrets\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.569829 kubelet[2713]: I0129 12:16:20.569576 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a674fbd-323b-46a8-b99d-c32947b96767-hubble-tls\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569593 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnsf\" (UniqueName: \"kubernetes.io/projected/5a674fbd-323b-46a8-b99d-c32947b96767-kube-api-access-gsnsf\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569610 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a674fbd-323b-46a8-b99d-c32947b96767-cilium-config-path\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569627 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-cilium-run\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569643 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-etc-cni-netd\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569683 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-lib-modules\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570018 kubelet[2713]: I0129 12:16:20.569733 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-xtables-lock\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570208 kubelet[2713]: I0129 12:16:20.569754 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-bpf-maps\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570208 kubelet[2713]: I0129 12:16:20.569775 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-hostproc\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570208 kubelet[2713]: I0129 12:16:20.569796 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-cilium-cgroup\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.570208 kubelet[2713]: I0129 12:16:20.569813 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a674fbd-323b-46a8-b99d-c32947b96767-host-proc-sys-net\") pod \"cilium-lh4bz\" (UID: \"5a674fbd-323b-46a8-b99d-c32947b96767\") " pod="kube-system/cilium-lh4bz"
Jan 29 12:16:20.575354 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118).
Jan 29 12:16:20.575753 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:57114.service: Deactivated successfully.
Jan 29 12:16:20.578790 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:16:20.579415 systemd-logind[1526]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:16:20.580659 systemd-logind[1526]: Removed session 25.
Jan 29 12:16:20.607246 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:16:20.608386 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:16:20.613153 systemd-logind[1526]: New session 26 of user core.
Jan 29 12:16:20.620416 systemd[1]: Started session-26.scope - Session 26 of User core.
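
The VerifyControllerAttachedVolume entries above enumerate the full volume set of the new cilium-lh4bz pod: ten hostPath mounts, two secrets, two projected volumes, and one configMap. Restated compactly as a sketch, grouped by volume plugin; the names are transcribed from the entries above.

    package main

    import "fmt"

    // Volumes of pod cilium-lh4bz (UID 5a674fbd-323b-46a8-b99d-c32947b96767),
    // grouped by plugin, transcribed from the log entries above.
    var volumes = map[string][]string{
        "kubernetes.io/host-path": {
            "host-proc-sys-kernel", "cni-path", "cilium-run", "etc-cni-netd",
            "lib-modules", "xtables-lock", "bpf-maps", "hostproc",
            "cilium-cgroup", "host-proc-sys-net",
        },
        "kubernetes.io/secret":    {"clustermesh-secrets", "cilium-ipsec-secrets"},
        "kubernetes.io/projected": {"hubble-tls", "kube-api-access-gsnsf"},
        "kubernetes.io/configmap": {"cilium-config-path"},
    }

    func main() {
        for plugin, names := range volumes {
            fmt.Printf("%-24s %2d volume(s): %v\n", plugin, len(names), names)
        }
    }
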
Jan 29 12:16:20.777777 kubelet[2713]: E0129 12:16:20.777656 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:20.779533 containerd[1546]: time="2025-01-29T12:16:20.779479245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lh4bz,Uid:5a674fbd-323b-46a8-b99d-c32947b96767,Namespace:kube-system,Attempt:0,}"
Jan 29 12:16:20.799163 containerd[1546]: time="2025-01-29T12:16:20.799056082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:16:20.799163 containerd[1546]: time="2025-01-29T12:16:20.799128403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:16:20.799163 containerd[1546]: time="2025-01-29T12:16:20.799140284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:16:20.799371 containerd[1546]: time="2025-01-29T12:16:20.799234646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:16:20.829151 containerd[1546]: time="2025-01-29T12:16:20.829057330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lh4bz,Uid:5a674fbd-323b-46a8-b99d-c32947b96767,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\""
Jan 29 12:16:20.830052 kubelet[2713]: E0129 12:16:20.829875 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:20.834260 containerd[1546]: time="2025-01-29T12:16:20.834224034Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 12:16:20.904013 containerd[1546]: time="2025-01-29T12:16:20.903944687Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a37ea61fbe8ba26b4588416bbe2968247ce2021682b4d5ba5d414ce520396ee\""
Jan 29 12:16:20.904848 containerd[1546]: time="2025-01-29T12:16:20.904742983Z" level=info msg="StartContainer for \"3a37ea61fbe8ba26b4588416bbe2968247ce2021682b4d5ba5d414ce520396ee\""
Jan 29 12:16:20.948417 containerd[1546]: time="2025-01-29T12:16:20.948373787Z" level=info msg="StartContainer for \"3a37ea61fbe8ba26b4588416bbe2968247ce2021682b4d5ba5d414ce520396ee\" returns successfully"
Jan 29 12:16:20.989703 containerd[1546]: time="2025-01-29T12:16:20.989646023Z" level=info msg="shim disconnected" id=3a37ea61fbe8ba26b4588416bbe2968247ce2021682b4d5ba5d414ce520396ee namespace=k8s.io
Jan 29 12:16:20.989703 containerd[1546]: time="2025-01-29T12:16:20.989698824Z" level=warning msg="cleaning up after shim disconnected" id=3a37ea61fbe8ba26b4588416bbe2968247ce2021682b4d5ba5d414ce520396ee namespace=k8s.io
Jan 29 12:16:20.989703 containerd[1546]: time="2025-01-29T12:16:20.989717625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:16:21.722646 kubelet[2713]: E0129 12:16:21.722588 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
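
The RunPodSandbox, CreateContainer, StartContainer progression above is the standard CRI sequence kubelet drives over containerd's gRPC socket; the "shim disconnected" messages afterwards are containerd cleaning up once the short-lived mount-cgroup init container exits. A minimal client sketch of the same three calls against the CRI v1 API (k8s.io/cri-api); the socket path and image reference are assumptions, and mounts, env, and error details are omitted.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket; adjust for your host.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox, as in the cilium-lh4bz entry above.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-lh4bz",
                    Uid:       "5a674fbd-323b-46a8-b99d-c32947b96767",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within that sandbox (image is a placeholder).
        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
                Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: c.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }
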
Jan 29 12:16:21.917258 kubelet[2713]: E0129 12:16:21.916654 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:21.921266 containerd[1546]: time="2025-01-29T12:16:21.921231679Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 12:16:21.941682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905905178.mount: Deactivated successfully.
Jan 29 12:16:21.949816 containerd[1546]: time="2025-01-29T12:16:21.949771078Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d\""
Jan 29 12:16:21.951404 containerd[1546]: time="2025-01-29T12:16:21.951281148Z" level=info msg="StartContainer for \"f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d\""
Jan 29 12:16:22.002489 containerd[1546]: time="2025-01-29T12:16:22.002325627Z" level=info msg="StartContainer for \"f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d\" returns successfully"
Jan 29 12:16:22.032465 containerd[1546]: time="2025-01-29T12:16:22.032404277Z" level=info msg="shim disconnected" id=f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d namespace=k8s.io
Jan 29 12:16:22.032465 containerd[1546]: time="2025-01-29T12:16:22.032458638Z" level=warning msg="cleaning up after shim disconnected" id=f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d namespace=k8s.io
Jan 29 12:16:22.032465 containerd[1546]: time="2025-01-29T12:16:22.032467278Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:16:22.675021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f071c587ba822e5569a2fa726c421bbe31b346a4688d66c25e428226c3068d1d-rootfs.mount: Deactivated successfully.
Jan 29 12:16:22.920308 kubelet[2713]: E0129 12:16:22.920277 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:22.922654 containerd[1546]: time="2025-01-29T12:16:22.922614494Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 12:16:22.935074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528048862.mount: Deactivated successfully.
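
The recurring dns.go "Nameserver limits exceeded" events reflect the glibc resolver's cap of three nameservers in resolv.conf: the node's configuration lists four, so kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns each time it regenerates a pod's DNS config. A sketch of that truncation rule, assuming the limit of three; this is an illustration, not kubelet's code.

    package main

    import "fmt"

    const maxNameservers = 3 // glibc resolv.conf limit (MAXNS)

    // capNameservers keeps the first maxNameservers entries and reports
    // whether any were dropped, mirroring the warning in the log above.
    func capNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        applied, truncated := capNameservers(
            []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        if truncated {
            fmt.Printf("Nameserver limits exceeded; applied line: %v\n", applied)
        }
    }
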
Jan 29 12:16:22.937902 containerd[1546]: time="2025-01-29T12:16:22.937838342Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d\""
Jan 29 12:16:22.938479 containerd[1546]: time="2025-01-29T12:16:22.938457834Z" level=info msg="StartContainer for \"cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d\""
Jan 29 12:16:22.982301 containerd[1546]: time="2025-01-29T12:16:22.982259984Z" level=info msg="StartContainer for \"cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d\" returns successfully"
Jan 29 12:16:23.010674 containerd[1546]: time="2025-01-29T12:16:23.010604594Z" level=info msg="shim disconnected" id=cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d namespace=k8s.io
Jan 29 12:16:23.010674 containerd[1546]: time="2025-01-29T12:16:23.010663595Z" level=warning msg="cleaning up after shim disconnected" id=cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d namespace=k8s.io
Jan 29 12:16:23.010674 containerd[1546]: time="2025-01-29T12:16:23.010673756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:16:23.675149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe11d3a685d610bfaf74d1efbec0b264a89481bcef3ec3c8318f20f38cfa57d-rootfs.mount: Deactivated successfully.
Jan 29 12:16:23.923578 kubelet[2713]: E0129 12:16:23.923529 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:23.927584 containerd[1546]: time="2025-01-29T12:16:23.927397177Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 12:16:23.940780 containerd[1546]: time="2025-01-29T12:16:23.940610018Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884\""
Jan 29 12:16:23.941790 containerd[1546]: time="2025-01-29T12:16:23.941306591Z" level=info msg="StartContainer for \"5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884\""
Jan 29 12:16:23.985905 containerd[1546]: time="2025-01-29T12:16:23.985849086Z" level=info msg="StartContainer for \"5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884\" returns successfully"
Jan 29 12:16:24.003932 containerd[1546]: time="2025-01-29T12:16:24.003859494Z" level=info msg="shim disconnected" id=5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884 namespace=k8s.io
Jan 29 12:16:24.003932 containerd[1546]: time="2025-01-29T12:16:24.003917536Z" level=warning msg="cleaning up after shim disconnected" id=5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884 namespace=k8s.io
Jan 29 12:16:24.003932 containerd[1546]: time="2025-01-29T12:16:24.003926096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:16:24.675166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3931d3e7c305f6d43c8f1b474b77b80aa12f2e6794bd6d6609f43930320884-rootfs.mount: Deactivated successfully.
Jan 29 12:16:24.774491 kubelet[2713]: E0129 12:16:24.774402 2713 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:16:24.928496 kubelet[2713]: E0129 12:16:24.928105 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:24.931647 containerd[1546]: time="2025-01-29T12:16:24.931611510Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 12:16:24.963824 containerd[1546]: time="2025-01-29T12:16:24.961443398Z" level=info msg="CreateContainer within sandbox \"7e5eb40f7efb292b93021ffdf7c84e0121fa51108fd2798675b87dfce7ced6dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de\""
Jan 29 12:16:24.965362 containerd[1546]: time="2025-01-29T12:16:24.964157766Z" level=info msg="StartContainer for \"47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de\""
Jan 29 12:16:25.019789 containerd[1546]: time="2025-01-29T12:16:25.019738498Z" level=info msg="StartContainer for \"47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de\" returns successfully"
Jan 29 12:16:25.280108 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 12:16:25.722736 kubelet[2713]: E0129 12:16:25.722688 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:25.722980 kubelet[2713]: E0129 12:16:25.722941 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:25.932925 kubelet[2713]: E0129 12:16:25.932895 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:25.947909 kubelet[2713]: I0129 12:16:25.947522 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lh4bz" podStartSLOduration=5.947504044 podStartE2EDuration="5.947504044s" podCreationTimestamp="2025-01-29 12:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:25.946795832 +0000 UTC m=+81.308555515" watchObservedRunningTime="2025-01-29 12:16:25.947504044 +0000 UTC m=+81.309263847"
Jan 29 12:16:26.566594 kubelet[2713]: I0129 12:16:26.566069 2713 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T12:16:26Z","lastTransitionTime":"2025-01-29T12:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 12:16:26.935423 kubelet[2713]: E0129 12:16:26.935392 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:26.978475 systemd[1]: run-containerd-runc-k8s.io-47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de-runc.4y31yP.mount: Deactivated successfully.
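
The pod_startup_latency_tracker entry above reports podStartSLOduration as observedRunningTime minus podCreationTimestamp; the zeroed pulling timestamps mean no image pull contributed. The arithmetic, as a sketch on the timestamps from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time string form used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-29 12:16:20 +0000 UTC")
        running, _ := time.Parse(layout, "2025-01-29 12:16:25.947504044 +0000 UTC")
        fmt.Println("podStartSLOduration:", running.Sub(created)) // 5.947504044s
    }
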
Jan 29 12:16:27.937483 kubelet[2713]: E0129 12:16:27.937120 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:28.095340 systemd-networkd[1230]: lxc_health: Link UP
Jan 29 12:16:28.109227 systemd-networkd[1230]: lxc_health: Gained carrier
Jan 29 12:16:28.941398 kubelet[2713]: E0129 12:16:28.941158 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:29.940820 kubelet[2713]: E0129 12:16:29.940751 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:16:30.039289 systemd-networkd[1230]: lxc_health: Gained IPv6LL
Jan 29 12:16:31.261283 systemd[1]: run-containerd-runc-k8s.io-47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de-runc.SyFQKJ.mount: Deactivated successfully.
Jan 29 12:16:33.428491 systemd[1]: run-containerd-runc-k8s.io-47a0dd155ce324bd25fc7cbc8c4cb54d7504f20265d9e95876e3cafefc4938de-runc.aA9INE.mount: Deactivated successfully.
Jan 29 12:16:33.477487 sshd[4543]: pam_unix(sshd:session): session closed for user core
Jan 29 12:16:33.479978 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:57118.service: Deactivated successfully.
Jan 29 12:16:33.482891 systemd-logind[1526]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:16:33.483554 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:16:33.486604 systemd-logind[1526]: Removed session 26.