Jan 17 11:59:12.912770 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 11:59:12.912792 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 11:59:12.912803 kernel: KASLR enabled
Jan 17 11:59:12.912808 kernel: efi: EFI v2.7 by EDK II
Jan 17 11:59:12.912814 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 17 11:59:12.912820 kernel: random: crng init done
Jan 17 11:59:12.912827 kernel: ACPI: Early table checksum verification disabled
Jan 17 11:59:12.912833 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 17 11:59:12.912839 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 11:59:12.912847 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912853 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912859 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912865 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912871 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912878 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912886 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912892 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912899 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:59:12.912905 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 17 11:59:12.912911 kernel: NUMA: Failed to initialise from firmware
Jan 17 11:59:12.912918 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:59:12.913059 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 17 11:59:12.913070 kernel: Zone ranges:
Jan 17 11:59:12.913077 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:59:12.913083 kernel: DMA32 empty
Jan 17 11:59:12.913094 kernel: Normal empty
Jan 17 11:59:12.913101 kernel: Movable zone start for each node
Jan 17 11:59:12.913107 kernel: Early memory node ranges
Jan 17 11:59:12.913113 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 17 11:59:12.913120 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 17 11:59:12.913126 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 17 11:59:12.913132 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 17 11:59:12.913139 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 17 11:59:12.913145 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 17 11:59:12.913151 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 17 11:59:12.913158 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:59:12.913164 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 17 11:59:12.913172 kernel: psci: probing for conduit method from ACPI.
Jan 17 11:59:12.913178 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 11:59:12.913184 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 11:59:12.913193 kernel: psci: Trusted OS migration not required
Jan 17 11:59:12.913200 kernel: psci: SMC Calling Convention v1.1
Jan 17 11:59:12.913207 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 11:59:12.913215 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 11:59:12.913222 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 11:59:12.913229 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 17 11:59:12.913235 kernel: Detected PIPT I-cache on CPU0
Jan 17 11:59:12.913242 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 11:59:12.913249 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 11:59:12.913255 kernel: CPU features: detected: Spectre-v4
Jan 17 11:59:12.913262 kernel: CPU features: detected: Spectre-BHB
Jan 17 11:59:12.913269 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 11:59:12.913275 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 11:59:12.913284 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 11:59:12.913290 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 11:59:12.913297 kernel: alternatives: applying boot alternatives
Jan 17 11:59:12.913305 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 11:59:12.913312 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 11:59:12.913319 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 11:59:12.913326 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 11:59:12.913333 kernel: Fallback order for Node 0: 0
Jan 17 11:59:12.913339 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 17 11:59:12.913346 kernel: Policy zone: DMA
Jan 17 11:59:12.913360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 11:59:12.913371 kernel: software IO TLB: area num 4.
Jan 17 11:59:12.913378 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 17 11:59:12.913385 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 17 11:59:12.913392 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 11:59:12.913399 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 11:59:12.913406 kernel: rcu: RCU event tracing is enabled.
Jan 17 11:59:12.913413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 11:59:12.913420 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 11:59:12.913426 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 11:59:12.913433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 11:59:12.913440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 11:59:12.913447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 11:59:12.913455 kernel: GICv3: 256 SPIs implemented
Jan 17 11:59:12.913462 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 11:59:12.913469 kernel: Root IRQ handler: gic_handle_irq
Jan 17 11:59:12.913475 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 11:59:12.913482 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 11:59:12.913489 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 11:59:12.913495 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 11:59:12.913502 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 11:59:12.913509 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 17 11:59:12.913516 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 17 11:59:12.913523 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 11:59:12.913531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:59:12.913538 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 11:59:12.913545 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 11:59:12.913551 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 11:59:12.913558 kernel: arm-pv: using stolen time PV
Jan 17 11:59:12.913565 kernel: Console: colour dummy device 80x25
Jan 17 11:59:12.913572 kernel: ACPI: Core revision 20230628
Jan 17 11:59:12.913579 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 11:59:12.913586 kernel: pid_max: default: 32768 minimum: 301
Jan 17 11:59:12.913593 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 11:59:12.913614 kernel: landlock: Up and running.
Jan 17 11:59:12.913623 kernel: SELinux: Initializing.
Jan 17 11:59:12.913630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 11:59:12.913637 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 11:59:12.913644 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 11:59:12.913652 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 11:59:12.913659 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 11:59:12.913666 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 11:59:12.913673 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 11:59:12.913681 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 11:59:12.913688 kernel: Remapping and enabling EFI services.
Jan 17 11:59:12.913695 kernel: smp: Bringing up secondary CPUs ...
Jan 17 11:59:12.913702 kernel: Detected PIPT I-cache on CPU1
Jan 17 11:59:12.913709 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 11:59:12.913716 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 17 11:59:12.913723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:59:12.913730 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 11:59:12.913737 kernel: Detected PIPT I-cache on CPU2
Jan 17 11:59:12.913744 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 17 11:59:12.913753 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 17 11:59:12.913760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:59:12.913771 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 17 11:59:12.913782 kernel: Detected PIPT I-cache on CPU3
Jan 17 11:59:12.913789 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 17 11:59:12.913796 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 17 11:59:12.913803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:59:12.913810 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 17 11:59:12.913818 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 11:59:12.913826 kernel: SMP: Total of 4 processors activated.
Jan 17 11:59:12.913834 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 11:59:12.913841 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 11:59:12.913849 kernel: CPU features: detected: Common not Private translations
Jan 17 11:59:12.913856 kernel: CPU features: detected: CRC32 instructions
Jan 17 11:59:12.913863 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 11:59:12.913870 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 11:59:12.913878 kernel: CPU features: detected: LSE atomic instructions
Jan 17 11:59:12.913886 kernel: CPU features: detected: Privileged Access Never
Jan 17 11:59:12.913893 kernel: CPU features: detected: RAS Extension Support
Jan 17 11:59:12.913901 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 11:59:12.913908 kernel: CPU: All CPU(s) started at EL1
Jan 17 11:59:12.913915 kernel: alternatives: applying system-wide alternatives
Jan 17 11:59:12.913922 kernel: devtmpfs: initialized
Jan 17 11:59:12.913930 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 11:59:12.913937 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 11:59:12.913945 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 11:59:12.913953 kernel: SMBIOS 3.0.0 present.
Jan 17 11:59:12.913961 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 17 11:59:12.913968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 11:59:12.913976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 11:59:12.913983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 11:59:12.913990 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 11:59:12.913998 kernel: audit: initializing netlink subsys (disabled)
Jan 17 11:59:12.914005 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 17 11:59:12.914012 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 11:59:12.914021 kernel: cpuidle: using governor menu
Jan 17 11:59:12.914028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 11:59:12.914036 kernel: ASID allocator initialised with 32768 entries
Jan 17 11:59:12.914043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 11:59:12.914050 kernel: Serial: AMBA PL011 UART driver
Jan 17 11:59:12.914057 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 11:59:12.914064 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 11:59:12.914072 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 11:59:12.914079 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 11:59:12.914088 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 11:59:12.914095 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 11:59:12.914102 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 11:59:12.914109 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 11:59:12.914117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 11:59:12.914124 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 11:59:12.914131 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 11:59:12.914138 kernel: ACPI: Added _OSI(Module Device)
Jan 17 11:59:12.914146 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 11:59:12.914154 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 11:59:12.914162 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 11:59:12.914169 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 11:59:12.914176 kernel: ACPI: Interpreter enabled
Jan 17 11:59:12.914184 kernel: ACPI: Using GIC for interrupt routing
Jan 17 11:59:12.914191 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 11:59:12.914198 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 11:59:12.914205 kernel: printk: console [ttyAMA0] enabled
Jan 17 11:59:12.914213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 11:59:12.914427 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 11:59:12.914506 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 11:59:12.914570 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 11:59:12.914711 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 11:59:12.914778 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 11:59:12.914788 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 11:59:12.914795 kernel: PCI host bridge to bus 0000:00
Jan 17 11:59:12.914870 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 11:59:12.914928 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 11:59:12.914984 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 11:59:12.915039 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 11:59:12.915115 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 11:59:12.915188 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 11:59:12.915256 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 17 11:59:12.915320 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 17 11:59:12.915395 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 11:59:12.915461 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 11:59:12.915525 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 17 11:59:12.915589 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 17 11:59:12.915658 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 11:59:12.915720 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 11:59:12.915778 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 11:59:12.915787 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 11:59:12.915795 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 11:59:12.915802 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 11:59:12.915810 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 11:59:12.915817 kernel: iommu: Default domain type: Translated
Jan 17 11:59:12.915824 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 11:59:12.915832 kernel: efivars: Registered efivars operations
Jan 17 11:59:12.915841 kernel: vgaarb: loaded
Jan 17 11:59:12.915848 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 11:59:12.915855 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 11:59:12.915863 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 11:59:12.915870 kernel: pnp: PnP ACPI init
Jan 17 11:59:12.915947 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 11:59:12.915957 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 11:59:12.915964 kernel: NET: Registered PF_INET protocol family
Jan 17 11:59:12.915974 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 11:59:12.915981 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 11:59:12.915989 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 11:59:12.915996 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 11:59:12.916004 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 11:59:12.916011 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 11:59:12.916018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 11:59:12.916026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 11:59:12.916033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 11:59:12.916041 kernel: PCI: CLS 0 bytes, default 64
Jan 17 11:59:12.916049 kernel: kvm [1]: HYP mode not available
Jan 17 11:59:12.916056 kernel: Initialise system trusted keyrings
Jan 17 11:59:12.916063 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 11:59:12.916071 kernel: Key type asymmetric registered
Jan 17 11:59:12.916078 kernel: Asymmetric key parser 'x509' registered
Jan 17 11:59:12.916085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 11:59:12.916092 kernel: io scheduler mq-deadline registered
Jan 17 11:59:12.916099 kernel: io scheduler kyber registered
Jan 17 11:59:12.916108 kernel: io scheduler bfq registered
Jan 17 11:59:12.916115 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 11:59:12.916123 kernel: ACPI: button: Power Button [PWRB]
Jan 17 11:59:12.916130 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 11:59:12.916195 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 17 11:59:12.916205 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 11:59:12.916212 kernel: thunder_xcv, ver 1.0
Jan 17 11:59:12.916219 kernel: thunder_bgx, ver 1.0
Jan 17 11:59:12.916227 kernel: nicpf, ver 1.0
Jan 17 11:59:12.916235 kernel: nicvf, ver 1.0
Jan 17 11:59:12.916305 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 11:59:12.916376 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T11:59:12 UTC (1737115152)
Jan 17 11:59:12.916387 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 11:59:12.916395 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 17 11:59:12.916402 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 11:59:12.916409 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 11:59:12.916417 kernel: NET: Registered PF_INET6 protocol family
Jan 17 11:59:12.916426 kernel: Segment Routing with IPv6
Jan 17 11:59:12.916434 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 11:59:12.916441 kernel: NET: Registered PF_PACKET protocol family
Jan 17 11:59:12.916449 kernel: Key type dns_resolver registered
Jan 17 11:59:12.916456 kernel: registered taskstats version 1
Jan 17 11:59:12.916463 kernel: Loading compiled-in X.509 certificates
Jan 17 11:59:12.916471 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 11:59:12.916478 kernel: Key type .fscrypt registered
Jan 17 11:59:12.916485 kernel: Key type fscrypt-provisioning registered
Jan 17 11:59:12.916493 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 11:59:12.916501 kernel: ima: Allocated hash algorithm: sha1
Jan 17 11:59:12.916508 kernel: ima: No architecture policies found
Jan 17 11:59:12.916515 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 11:59:12.916523 kernel: clk: Disabling unused clocks
Jan 17 11:59:12.916530 kernel: Freeing unused kernel memory: 39360K
Jan 17 11:59:12.916537 kernel: Run /init as init process
Jan 17 11:59:12.916545 kernel: with arguments:
Jan 17 11:59:12.916552 kernel: /init
Jan 17 11:59:12.916560 kernel: with environment:
Jan 17 11:59:12.916567 kernel: HOME=/
Jan 17 11:59:12.916575 kernel: TERM=linux
Jan 17 11:59:12.916582 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 11:59:12.916591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 11:59:12.916600 systemd[1]: Detected virtualization kvm.
Jan 17 11:59:12.916641 systemd[1]: Detected architecture arm64.
Jan 17 11:59:12.916649 systemd[1]: Running in initrd.
Jan 17 11:59:12.916659 systemd[1]: No hostname configured, using default hostname.
Jan 17 11:59:12.916667 systemd[1]: Hostname set to <localhost>.
Jan 17 11:59:12.916675 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 11:59:12.916682 systemd[1]: Queued start job for default target initrd.target.
Jan 17 11:59:12.916690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 11:59:12.916698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 11:59:12.916706 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 11:59:12.916714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 11:59:12.916724 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 11:59:12.916732 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 11:59:12.916741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 11:59:12.916749 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 11:59:12.916757 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 11:59:12.916765 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 11:59:12.916774 systemd[1]: Reached target paths.target - Path Units.
Jan 17 11:59:12.916782 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 11:59:12.916789 systemd[1]: Reached target swap.target - Swaps.
Jan 17 11:59:12.916797 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 11:59:12.916805 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 11:59:12.916813 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 11:59:12.916821 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 11:59:12.916828 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 11:59:12.916836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 11:59:12.916845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 11:59:12.916853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 11:59:12.916861 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 11:59:12.916869 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 11:59:12.916877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 11:59:12.916885 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 11:59:12.916892 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 11:59:12.916900 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 11:59:12.916908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 11:59:12.916917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:59:12.916925 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 11:59:12.916932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 11:59:12.916940 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 11:59:12.916949 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 11:59:12.916977 systemd-journald[239]: Collecting audit messages is disabled.
Jan 17 11:59:12.916996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:59:12.917005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 11:59:12.917015 systemd-journald[239]: Journal started
Jan 17 11:59:12.917033 systemd-journald[239]: Runtime Journal (/run/log/journal/07c668562b6748dab3d5e21643e3a0a1) is 5.9M, max 47.3M, 41.4M free.
Jan 17 11:59:12.908526 systemd-modules-load[240]: Inserted module 'overlay'
Jan 17 11:59:12.920795 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 11:59:12.925628 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 11:59:12.927526 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 17 11:59:12.928544 kernel: Bridge firewalling registered
Jan 17 11:59:12.938768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:59:12.941040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 11:59:12.942782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 11:59:12.944680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 11:59:12.951768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 11:59:12.952998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 11:59:12.955258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 11:59:12.961236 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:59:12.982750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 11:59:12.983916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 11:59:12.988859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 11:59:12.994483 dracut-cmdline[274]: dracut-dracut-053
Jan 17 11:59:12.996145 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 11:59:13.028015 systemd-resolved[280]: Positive Trust Anchors:
Jan 17 11:59:13.028029 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 11:59:13.028061 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 11:59:13.036907 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 17 11:59:13.037873 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 11:59:13.040147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 11:59:13.062631 kernel: SCSI subsystem initialized
Jan 17 11:59:13.066634 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 11:59:13.074635 kernel: iscsi: registered transport (tcp)
Jan 17 11:59:13.087637 kernel: iscsi: registered transport (qla4xxx)
Jan 17 11:59:13.087672 kernel: QLogic iSCSI HBA Driver
Jan 17 11:59:13.127753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 11:59:13.136741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 11:59:13.153971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 11:59:13.154018 kernel: device-mapper: uevent: version 1.0.3
Jan 17 11:59:13.157635 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 11:59:13.206639 kernel: raid6: neonx8 gen() 15585 MB/s
Jan 17 11:59:13.223630 kernel: raid6: neonx4 gen() 15465 MB/s
Jan 17 11:59:13.240635 kernel: raid6: neonx2 gen() 13010 MB/s
Jan 17 11:59:13.257630 kernel: raid6: neonx1 gen() 10388 MB/s
Jan 17 11:59:13.274627 kernel: raid6: int64x8 gen() 6158 MB/s
Jan 17 11:59:13.291634 kernel: raid6: int64x4 gen() 7343 MB/s
Jan 17 11:59:13.308628 kernel: raid6: int64x2 gen() 6125 MB/s
Jan 17 11:59:13.325717 kernel: raid6: int64x1 gen() 5049 MB/s
Jan 17 11:59:13.325739 kernel: raid6: using algorithm neonx8 gen() 15585 MB/s
Jan 17 11:59:13.343696 kernel: raid6: .... xor() 11925 MB/s, rmw enabled
Jan 17 11:59:13.343726 kernel: raid6: using neon recovery algorithm
Jan 17 11:59:13.349123 kernel: xor: measuring software checksum speed
Jan 17 11:59:13.349138 kernel: 8regs : 19759 MB/sec
Jan 17 11:59:13.349836 kernel: 32regs : 19641 MB/sec
Jan 17 11:59:13.351080 kernel: arm64_neon : 26708 MB/sec
Jan 17 11:59:13.351092 kernel: xor: using function: arm64_neon (26708 MB/sec)
Jan 17 11:59:13.402628 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 11:59:13.412675 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 11:59:13.424811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 11:59:13.436450 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 17 11:59:13.439588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 11:59:13.454772 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 11:59:13.465655 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jan 17 11:59:13.492183 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 11:59:13.504828 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 11:59:13.543793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 11:59:13.552752 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 11:59:13.564508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 11:59:13.565953 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 11:59:13.567862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 11:59:13.570025 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 11:59:13.578941 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 11:59:13.587228 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 17 11:59:13.590735 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 11:59:13.590824 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 11:59:13.590835 kernel: GPT:9289727 != 19775487
Jan 17 11:59:13.590844 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 11:59:13.590853 kernel: GPT:9289727 != 19775487
Jan 17 11:59:13.590862 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 11:59:13.590871 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:59:13.592708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 11:59:13.594842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 11:59:13.594958 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:59:13.601803 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:59:13.610536 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (509)
Jan 17 11:59:13.604633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 11:59:13.605836 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:59:13.610890 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:59:13.617032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:59:13.620093 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521)
Jan 17 11:59:13.628067 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 11:59:13.629482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:59:13.643664 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 11:59:13.647590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 11:59:13.648824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 11:59:13.655186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 11:59:13.666722 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 11:59:13.668425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:59:13.673519 disk-uuid[553]: Primary Header is updated.
Jan 17 11:59:13.673519 disk-uuid[553]: Secondary Entries is updated.
Jan 17 11:59:13.673519 disk-uuid[553]: Secondary Header is updated.
Jan 17 11:59:13.677627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:59:13.691265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:59:14.690625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:59:14.691026 disk-uuid[554]: The operation has completed successfully.
Jan 17 11:59:14.719377 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 11:59:14.719477 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 11:59:14.739753 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 11:59:14.744882 sh[576]: Success
Jan 17 11:59:14.758642 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 11:59:14.794098 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 11:59:14.818070 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 11:59:14.819641 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 11:59:14.832634 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 11:59:14.832685 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:59:14.832697 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 11:59:14.833938 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 11:59:14.834653 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 11:59:14.838293 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 11:59:14.839671 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 11:59:14.840388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 11:59:14.843233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 11:59:14.854478 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:59:14.854521 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:59:14.854543 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:59:14.857655 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:59:14.863875 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 11:59:14.865756 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:59:14.870826 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 11:59:14.877754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 11:59:14.934214 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 11:59:14.944766 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 11:59:14.977658 systemd-networkd[766]: lo: Link UP
Jan 17 11:59:14.977668 systemd-networkd[766]: lo: Gained carrier
Jan 17 11:59:14.977876 ignition[675]: Ignition 2.19.0
Jan 17 11:59:14.978307 systemd-networkd[766]: Enumeration completed
Jan 17 11:59:14.977882 ignition[675]: Stage: fetch-offline
Jan 17 11:59:14.978587 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 11:59:14.977913 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:14.978958 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:59:14.977921 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:14.978963 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 11:59:14.978124 ignition[675]: parsed url from cmdline: ""
Jan 17 11:59:14.980986 systemd-networkd[766]: eth0: Link UP
Jan 17 11:59:14.978127 ignition[675]: no config URL provided
Jan 17 11:59:14.980990 systemd-networkd[766]: eth0: Gained carrier
Jan 17 11:59:14.978132 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 11:59:14.980997 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:59:14.978139 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Jan 17 11:59:14.981032 systemd[1]: Reached target network.target - Network.
Jan 17 11:59:14.978159 ignition[675]: op(1): [started] loading QEMU firmware config module
Jan 17 11:59:14.978164 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 11:59:14.991433 ignition[675]: op(1): [finished] loading QEMU firmware config module
Jan 17 11:59:15.003648 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 11:59:15.036316 ignition[675]: parsing config with SHA512: 33746a3690777fc318c58aea0c998f4ad65a557dd964cab72a88d526ea0134ad189aeb9405019686e4787c267faab9bfbe7e5e3c13ce6cb23cda69df61c16b1d
Jan 17 11:59:15.045654 unknown[675]: fetched base config from "system"
Jan 17 11:59:15.045666 unknown[675]: fetched user config from "qemu"
Jan 17 11:59:15.046110 ignition[675]: fetch-offline: fetch-offline passed
Jan 17 11:59:15.047786 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 11:59:15.046172 ignition[675]: Ignition finished successfully
Jan 17 11:59:15.050042 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 11:59:15.058833 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 11:59:15.069410 ignition[773]: Ignition 2.19.0
Jan 17 11:59:15.069420 ignition[773]: Stage: kargs
Jan 17 11:59:15.069585 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:15.069595 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:15.073069 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 11:59:15.070488 ignition[773]: kargs: kargs passed
Jan 17 11:59:15.075469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 11:59:15.070531 ignition[773]: Ignition finished successfully
Jan 17 11:59:15.088907 ignition[782]: Ignition 2.19.0
Jan 17 11:59:15.088917 ignition[782]: Stage: disks
Jan 17 11:59:15.089086 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:15.091925 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 11:59:15.089096 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:15.093046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 11:59:15.089988 ignition[782]: disks: disks passed
Jan 17 11:59:15.094738 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 11:59:15.090033 ignition[782]: Ignition finished successfully
Jan 17 11:59:15.096698 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 11:59:15.098476 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 11:59:15.099887 systemd[1]: Reached target basic.target - Basic System.
Jan 17 11:59:15.109809 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 11:59:15.120093 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 11:59:15.124257 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 11:59:15.129717 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 11:59:15.170475 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 11:59:15.172082 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 11:59:15.171796 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 11:59:15.185714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 11:59:15.187383 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 11:59:15.188768 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 11:59:15.188810 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 11:59:15.197549 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Jan 17 11:59:15.197580 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:59:15.197591 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:59:15.197601 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:59:15.188833 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 11:59:15.201192 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:59:15.195906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 11:59:15.211752 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 11:59:15.213744 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 11:59:15.254279 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 11:59:15.258267 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jan 17 11:59:15.262582 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 11:59:15.267001 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 11:59:15.345678 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 11:59:15.359699 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 11:59:15.361290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 11:59:15.367648 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:59:15.382098 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 11:59:15.388475 ignition[917]: INFO : Ignition 2.19.0
Jan 17 11:59:15.388475 ignition[917]: INFO : Stage: mount
Jan 17 11:59:15.390758 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:15.390758 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:15.390758 ignition[917]: INFO : mount: mount passed
Jan 17 11:59:15.390758 ignition[917]: INFO : Ignition finished successfully
Jan 17 11:59:15.391196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 11:59:15.399055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 11:59:15.831033 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 11:59:15.840792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 11:59:15.847276 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Jan 17 11:59:15.847304 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:59:15.847314 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:59:15.848865 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:59:15.851627 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:59:15.852249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 11:59:15.869071 ignition[948]: INFO : Ignition 2.19.0
Jan 17 11:59:15.869071 ignition[948]: INFO : Stage: files
Jan 17 11:59:15.870754 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:15.870754 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:15.870754 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 11:59:15.874129 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 11:59:15.874129 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 11:59:15.874129 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 11:59:15.874129 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 11:59:15.874129 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 11:59:15.873340 unknown[948]: wrote ssh authorized keys file for user: core
Jan 17 11:59:15.881487 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 11:59:15.881487 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 11:59:15.930904 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 11:59:16.040357 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 11:59:16.042447 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 11:59:16.042447 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 17 11:59:16.105724 systemd-networkd[766]: eth0: Gained IPv6LL
Jan 17 11:59:16.327777 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 11:59:16.378026 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 11:59:16.378026 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 11:59:16.381460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 17 11:59:16.554325 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 11:59:16.768036 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 11:59:16.768036 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 17 11:59:16.771547 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 11:59:16.792058 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 11:59:16.795309 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 11:59:16.797741 ignition[948]: INFO : files: files passed
Jan 17 11:59:16.797741 ignition[948]: INFO : Ignition finished successfully
Jan 17 11:59:16.798046 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 11:59:16.806781 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 11:59:16.809350 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 11:59:16.811788 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 11:59:16.812669 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 11:59:16.816825 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 11:59:16.820432 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:59:16.820432 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:59:16.823442 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:59:16.823703 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 11:59:16.826269 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 11:59:16.839750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 11:59:16.857887 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 11:59:16.858680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 11:59:16.860123 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 11:59:16.861962 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 11:59:16.863706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 11:59:16.872764 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 11:59:16.884813 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 11:59:16.895765 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 11:59:16.903648 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 11:59:16.904833 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 11:59:16.906784 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 11:59:16.908504 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 11:59:16.908639 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 11:59:16.911087 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 11:59:16.913040 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 11:59:16.914623 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 11:59:16.916301 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 11:59:16.918182 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 11:59:16.920079 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 11:59:16.921855 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 11:59:16.923749 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 11:59:16.925681 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 11:59:16.927428 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 11:59:16.928911 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 11:59:16.929035 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 11:59:16.931307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 11:59:16.933250 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 11:59:16.935119 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 11:59:16.938666 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 11:59:16.939886 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 11:59:16.940007 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 11:59:16.942713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 11:59:16.942834 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 11:59:16.944829 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 11:59:16.946385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 11:59:16.949694 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 11:59:16.951016 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 11:59:16.953043 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 11:59:16.954551 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 11:59:16.954657 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 11:59:16.956178 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 11:59:16.956266 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 11:59:16.957743 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 11:59:16.957858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 11:59:16.959623 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 11:59:16.959733 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 11:59:16.973804 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 11:59:16.975368 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 11:59:16.976265 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 11:59:16.976404 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 11:59:16.978357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 11:59:16.978463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 11:59:16.984807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 11:59:16.985757 ignition[1002]: INFO : Ignition 2.19.0
Jan 17 11:59:16.985757 ignition[1002]: INFO : Stage: umount
Jan 17 11:59:16.985757 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 11:59:16.985757 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:59:16.991698 ignition[1002]: INFO : umount: umount passed
Jan 17 11:59:16.991698 ignition[1002]: INFO : Ignition finished successfully
Jan 17 11:59:16.986102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 11:59:16.988229 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 11:59:16.988318 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 11:59:16.991335 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 11:59:16.991834 systemd[1]: Stopped target network.target - Network.
Jan 17 11:59:16.994726 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 11:59:16.994799 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 11:59:16.996904 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 11:59:16.996962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 11:59:16.998882 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 11:59:16.998927 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 11:59:17.000586 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 11:59:17.000664 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 11:59:17.002654 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 11:59:17.004320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 11:59:17.010874 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 11:59:17.010987 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 11:59:17.013332 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 11:59:17.013387 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 11:59:17.017656 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jan 17 11:59:17.019049 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 11:59:17.019167 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 11:59:17.020793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 11:59:17.020825 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 11:59:17.029697 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 11:59:17.030570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 11:59:17.030650 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 11:59:17.032591 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 11:59:17.032646 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 11:59:17.034729 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 11:59:17.034773 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 11:59:17.037916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 11:59:17.047550 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 11:59:17.048733 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 11:59:17.052414 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 11:59:17.052551 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 11:59:17.055853 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 11:59:17.055965 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 11:59:17.058194 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 11:59:17.058330 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 11:59:17.061122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 11:59:17.061198 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 11:59:17.062971 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 11:59:17.063008 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 11:59:17.064712 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 11:59:17.064763 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 11:59:17.067409 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 11:59:17.067457 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 11:59:17.070125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 11:59:17.070183 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:59:17.081813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 11:59:17.082915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 11:59:17.083001 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 11:59:17.085119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 11:59:17.085166 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 11:59:17.087236 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 11:59:17.087283 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 11:59:17.089465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 11:59:17.089511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:59:17.091835 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 11:59:17.093641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 11:59:17.095403 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 11:59:17.097726 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 11:59:17.108401 systemd[1]: Switching root.
Jan 17 11:59:17.144916 systemd-journald[239]: Journal stopped
Jan 17 11:59:17.892110 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 17 11:59:17.892170 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 11:59:17.892186 kernel: SELinux: policy capability open_perms=1
Jan 17 11:59:17.892199 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 11:59:17.892209 kernel: SELinux: policy capability always_check_network=0
Jan 17 11:59:17.892219 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 11:59:17.892232 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 11:59:17.892242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 11:59:17.892252 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 11:59:17.892261 kernel: audit: type=1403 audit(1737115157.336:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 11:59:17.892272 systemd[1]: Successfully loaded SELinux policy in 30.196ms.
Jan 17 11:59:17.892289 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.244ms.
Jan 17 11:59:17.892311 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 11:59:17.892323 systemd[1]: Detected virtualization kvm.
Jan 17 11:59:17.892333 systemd[1]: Detected architecture arm64.
Jan 17 11:59:17.892343 systemd[1]: Detected first boot.
Jan 17 11:59:17.892356 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 11:59:17.892366 zram_generator::config[1048]: No configuration found.
Jan 17 11:59:17.892377 systemd[1]: Populated /etc with preset unit settings.
Jan 17 11:59:17.892388 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 11:59:17.892400 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 11:59:17.892410 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 11:59:17.892421 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 11:59:17.892432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 11:59:17.892442 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 11:59:17.892452 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 11:59:17.892463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 11:59:17.892474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 11:59:17.892484 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 11:59:17.892496 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 11:59:17.892507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 11:59:17.892517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 11:59:17.892528 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 11:59:17.892538 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 11:59:17.892548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 11:59:17.892559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 11:59:17.892569 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 17 11:59:17.892582 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 11:59:17.892593 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 11:59:17.892610 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 11:59:17.892622 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 11:59:17.892632 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 11:59:17.892644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 11:59:17.892655 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 11:59:17.892665 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 11:59:17.892677 systemd[1]: Reached target swap.target - Swaps.
Jan 17 11:59:17.892688 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 11:59:17.892698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 11:59:17.892708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 11:59:17.892719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 11:59:17.892729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 11:59:17.892739 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 11:59:17.892750 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 11:59:17.892760 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 11:59:17.892775 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 11:59:17.892785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 11:59:17.892796 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 11:59:17.892806 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 11:59:17.892818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 11:59:17.892829 systemd[1]: Reached target machines.target - Containers.
Jan 17 11:59:17.892840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 11:59:17.892905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 11:59:17.892920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 11:59:17.892934 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 11:59:17.892945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 11:59:17.892955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 11:59:17.892966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 11:59:17.892976 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 11:59:17.892986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 11:59:17.892998 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 11:59:17.893008 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 11:59:17.893019 kernel: fuse: init (API version 7.39)
Jan 17 11:59:17.893029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 11:59:17.893040 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 11:59:17.893050 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 11:59:17.893060 kernel: loop: module loaded
Jan 17 11:59:17.893069 kernel: ACPI: bus type drm_connector registered
Jan 17 11:59:17.893079 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 11:59:17.893089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 11:59:17.893102 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 11:59:17.893114 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 11:59:17.893124 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 11:59:17.893155 systemd-journald[1119]: Collecting audit messages is disabled.
Jan 17 11:59:17.893177 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 11:59:17.893188 systemd[1]: Stopped verity-setup.service.
Jan 17 11:59:17.893198 systemd-journald[1119]: Journal started
Jan 17 11:59:17.893221 systemd-journald[1119]: Runtime Journal (/run/log/journal/07c668562b6748dab3d5e21643e3a0a1) is 5.9M, max 47.3M, 41.4M free.
Jan 17 11:59:17.690817 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 11:59:17.709698 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 11:59:17.710068 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 11:59:17.895623 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 11:59:17.897115 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 11:59:17.898270 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 11:59:17.899496 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 11:59:17.900627 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 11:59:17.901805 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 11:59:17.902986 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 11:59:17.904172 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 11:59:17.905563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 11:59:17.907037 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 11:59:17.907182 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 11:59:17.908631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 11:59:17.908764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 11:59:17.910089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 11:59:17.910233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 11:59:17.911522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 11:59:17.911678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 11:59:17.913173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 11:59:17.913320 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 11:59:17.914668 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 11:59:17.914803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 11:59:17.916099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 11:59:17.917574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 11:59:17.919074 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 11:59:17.931154 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 11:59:17.939698 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 11:59:17.941698 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 11:59:17.942774 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 11:59:17.942812 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 11:59:17.944685 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 11:59:17.946830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 11:59:17.948879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 11:59:17.949962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 11:59:17.951140 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 11:59:17.953031 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 11:59:17.954233 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 11:59:17.957756 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 11:59:17.958927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 11:59:17.961470 systemd-journald[1119]: Time spent on flushing to /var/log/journal/07c668562b6748dab3d5e21643e3a0a1 is 13.281ms for 857 entries.
Jan 17 11:59:17.961470 systemd-journald[1119]: System Journal (/var/log/journal/07c668562b6748dab3d5e21643e3a0a1) is 8.0M, max 195.6M, 187.6M free.
Jan 17 11:59:17.979553 systemd-journald[1119]: Received client request to flush runtime journal.
Jan 17 11:59:17.961837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 11:59:17.970789 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 11:59:17.975773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 11:59:17.978241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 11:59:17.979778 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 11:59:17.981080 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 11:59:17.982622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 11:59:17.984107 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 11:59:17.986743 kernel: loop0: detected capacity change from 0 to 194512
Jan 17 11:59:17.986684 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 11:59:17.993234 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 11:59:17.996811 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 11:59:18.000834 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 11:59:18.003895 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 11:59:18.003872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 11:59:18.012031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 11:59:18.013506 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 11:59:18.020870 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 17 11:59:18.021186 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 17 11:59:18.021511 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 11:59:18.025866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 11:59:18.034830 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 11:59:18.040718 kernel: loop1: detected capacity change from 0 to 114328
Jan 17 11:59:18.060645 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 11:59:18.071797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 11:59:18.074720 kernel: loop2: detected capacity change from 0 to 114432
Jan 17 11:59:18.084831 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 17 11:59:18.084844 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 17 11:59:18.089621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 11:59:18.118655 kernel: loop3: detected capacity change from 0 to 194512
Jan 17 11:59:18.123628 kernel: loop4: detected capacity change from 0 to 114328
Jan 17 11:59:18.128620 kernel: loop5: detected capacity change from 0 to 114432
Jan 17 11:59:18.130917 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 17 11:59:18.131280 (sd-merge)[1188]: Merged extensions into '/usr'.
Jan 17 11:59:18.135968 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 11:59:18.135981 systemd[1]: Reloading...
Jan 17 11:59:18.188636 zram_generator::config[1214]: No configuration found.
Jan 17 11:59:18.230328 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 11:59:18.274551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 11:59:18.310312 systemd[1]: Reloading finished in 173 ms.
Jan 17 11:59:18.343696 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 11:59:18.345193 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 11:59:18.357767 systemd[1]: Starting ensure-sysext.service...
Jan 17 11:59:18.359662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 11:59:18.367649 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jan 17 11:59:18.367665 systemd[1]: Reloading...
Jan 17 11:59:18.375954 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 11:59:18.376210 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 11:59:18.377201 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 11:59:18.377509 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 17 11:59:18.377686 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 17 11:59:18.379893 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 11:59:18.380002 systemd-tmpfiles[1249]: Skipping /boot
Jan 17 11:59:18.387172 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 11:59:18.387275 systemd-tmpfiles[1249]: Skipping /boot
Jan 17 11:59:18.418654 zram_generator::config[1274]: No configuration found.
Jan 17 11:59:18.501696 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 11:59:18.537232 systemd[1]: Reloading finished in 169 ms.
Jan 17 11:59:18.555686 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 11:59:18.563972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 11:59:18.571841 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 11:59:18.574496 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 11:59:18.576888 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 11:59:18.579906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 11:59:18.584939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 11:59:18.591543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 11:59:18.595361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 11:59:18.597811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 11:59:18.600720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 11:59:18.603875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 11:59:18.607809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 11:59:18.615833 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 11:59:18.617935 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 11:59:18.620461 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Jan 17 11:59:18.621161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 11:59:18.624963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 11:59:18.626863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 11:59:18.626989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 11:59:18.628995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 11:59:18.629135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 11:59:18.636320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 11:59:18.645880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 11:59:18.649911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 11:59:18.654483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 11:59:18.655950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 11:59:18.657311 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 11:59:18.659325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 11:59:18.661520 augenrules[1353]: No rules
Jan 17 11:59:18.662618 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 11:59:18.664394 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 11:59:18.666304 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 11:59:18.668161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 11:59:18.668315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 11:59:18.680702 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 11:59:18.690519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 11:59:18.690669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 11:59:18.691619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369)
Jan 17 11:59:18.692742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 11:59:18.692868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 11:59:18.694522 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 11:59:18.698723 systemd[1]: Finished ensure-sysext.service.
Jan 17 11:59:18.703986 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 17 11:59:18.707064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 11:59:18.712151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 11:59:18.716717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 11:59:18.717916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 11:59:18.731769 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 11:59:18.733025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 11:59:18.735420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 11:59:18.738184 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 11:59:18.738663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 11:59:18.738831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 11:59:18.740262 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 11:59:18.740524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 11:59:18.752911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 11:59:18.771980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 11:59:18.782975 systemd-resolved[1318]: Positive Trust Anchors:
Jan 17 11:59:18.782990 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 11:59:18.783024 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 11:59:18.784766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 11:59:18.790438 systemd-resolved[1318]: Defaulting to hostname 'linux'.
Jan 17 11:59:18.801053 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 11:59:18.802337 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 11:59:18.812407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 11:59:18.821348 systemd-networkd[1390]: lo: Link UP
Jan 17 11:59:18.821358 systemd-networkd[1390]: lo: Gained carrier
Jan 17 11:59:18.822079 systemd-networkd[1390]: Enumeration completed
Jan 17 11:59:18.822376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 11:59:18.822591 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:59:18.822609 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 11:59:18.823306 systemd-networkd[1390]: eth0: Link UP
Jan 17 11:59:18.823314 systemd-networkd[1390]: eth0: Gained carrier
Jan 17 11:59:18.823328 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:59:18.824000 systemd[1]: Reached target network.target - Network.
Jan 17 11:59:18.834830 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 11:59:18.836140 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 11:59:18.839099 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 11:59:18.841688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:59:18.841723 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 11:59:18.843019 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Jan 17 11:59:19.275626 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 17 11:59:19.275680 systemd-timesyncd[1391]: Initial clock synchronization to Fri 2025-01-17 11:59:19.275519 UTC.
Jan 17 11:59:19.275724 systemd-resolved[1318]: Clock change detected. Flushing caches.
Jan 17 11:59:19.283109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 11:59:19.285790 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 11:59:19.302250 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 11:59:19.320422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:59:19.328348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 11:59:19.329817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 11:59:19.331145 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 11:59:19.332289 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 11:59:19.333527 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 11:59:19.334955 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 11:59:19.336085 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 11:59:19.337287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 11:59:19.338656 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 11:59:19.338690 systemd[1]: Reached target paths.target - Path Units.
Jan 17 11:59:19.339595 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 11:59:19.341075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 11:59:19.343455 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 11:59:19.352857 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 11:59:19.355021 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 11:59:19.356533 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 11:59:19.357701 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 11:59:19.358713 systemd[1]: Reached target basic.target - Basic System.
Jan 17 11:59:19.359695 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 11:59:19.359729 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 11:59:19.360625 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 11:59:19.363006 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 11:59:19.362619 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 11:59:19.366065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 11:59:19.369166 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 11:59:19.371324 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 11:59:19.372347 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 11:59:19.380102 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 11:59:19.383071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 11:59:19.387976 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 11:59:19.396744 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 11:59:19.400312 jq[1417]: false
Jan 17 11:59:19.401005 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 11:59:19.401465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 11:59:19.402222 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 11:59:19.406384 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 11:59:19.408303 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 11:59:19.410514 extend-filesystems[1418]: Found loop3
Jan 17 11:59:19.410514 extend-filesystems[1418]: Found loop4
Jan 17 11:59:19.410514 extend-filesystems[1418]: Found loop5
Jan 17 11:59:19.410514 extend-filesystems[1418]: Found vda
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda1
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda2
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda3
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found usr
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda4
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda6
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda7
Jan 17 11:59:19.417368 extend-filesystems[1418]: Found vda9
Jan 17 11:59:19.417368 extend-filesystems[1418]: Checking size of /dev/vda9
Jan 17 11:59:19.436382 jq[1433]: true
Jan 17 11:59:19.412297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 11:59:19.415231 dbus-daemon[1416]: [system] SELinux support is enabled
Jan 17 11:59:19.412450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 11:59:19.414221 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 11:59:19.414359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 11:59:19.419588 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 11:59:19.422546 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 11:59:19.422741 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 11:59:19.440159 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 11:59:19.442388 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 11:59:19.447112 tar[1436]: linux-arm64/helm Jan 17 11:59:19.442465 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 11:59:19.443983 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 11:59:19.444003 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 11:59:19.449743 jq[1438]: true Jan 17 11:59:19.461328 extend-filesystems[1418]: Resized partition /dev/vda9 Jan 17 11:59:19.468728 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Jan 17 11:59:19.478558 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 11:59:19.478264 systemd[1]: Started update-engine.service - Update Engine. Jan 17 11:59:19.478664 update_engine[1431]: I20250117 11:59:19.470376 1431 main.cc:92] Flatcar Update Engine starting Jan 17 11:59:19.483680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1359) Jan 17 11:59:19.482124 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 11:59:19.483787 update_engine[1431]: I20250117 11:59:19.481321 1431 update_check_scheduler.cc:74] Next update check in 3m45s Jan 17 11:59:19.484859 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 11:59:19.485305 systemd-logind[1426]: New seat seat0. Jan 17 11:59:19.488986 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 11:59:19.520478 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 11:59:19.541278 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 11:59:19.541278 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 11:59:19.541278 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 11:59:19.550769 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Jan 17 11:59:19.545365 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 11:59:19.546246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 11:59:19.560442 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Jan 17 11:59:19.562988 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 11:59:19.568163 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 11:59:19.596174 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 11:59:19.671744 containerd[1439]: time="2025-01-17T11:59:19.671384560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 11:59:19.700566 containerd[1439]: time="2025-01-17T11:59:19.700132280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701662200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701696320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701711360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701872400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701907360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701972040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.701984000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.702143240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.702156880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.702168960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:19.702846 containerd[1439]: time="2025-01-17T11:59:19.702178360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702246120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702430120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702516920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702531240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702619600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 11:59:19.703104 containerd[1439]: time="2025-01-17T11:59:19.702672920Z" level=info msg="metadata content store policy set" policy=shared Jan 17 11:59:19.708487 containerd[1439]: time="2025-01-17T11:59:19.708456120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 11:59:19.708696 containerd[1439]: time="2025-01-17T11:59:19.708676840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 11:59:19.708853 containerd[1439]: time="2025-01-17T11:59:19.708836920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 11:59:19.708964 containerd[1439]: time="2025-01-17T11:59:19.708946560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 11:59:19.709082 containerd[1439]: time="2025-01-17T11:59:19.709065240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 11:59:19.709478 containerd[1439]: time="2025-01-17T11:59:19.709401240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 11:59:19.709987 containerd[1439]: time="2025-01-17T11:59:19.709964040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 11:59:19.710317 containerd[1439]: time="2025-01-17T11:59:19.710296680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 11:59:19.710449 containerd[1439]: time="2025-01-17T11:59:19.710432440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 11:59:19.710518 containerd[1439]: time="2025-01-17T11:59:19.710504920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710558480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710651600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710668280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710685880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710701200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710713400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710749360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710767720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710794280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710809560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710821800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710834800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710848120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711250 containerd[1439]: time="2025-01-17T11:59:19.710862200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710874360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710902760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710927360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710945840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710961640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710973560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.710985040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711005840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711028040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711040760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711052000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711169480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711187840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 11:59:19.711539 containerd[1439]: time="2025-01-17T11:59:19.711199120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 11:59:19.711781 containerd[1439]: time="2025-01-17T11:59:19.711211040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 11:59:19.711781 containerd[1439]: time="2025-01-17T11:59:19.711220080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.712170 containerd[1439]: time="2025-01-17T11:59:19.711233840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 11:59:19.712170 containerd[1439]: time="2025-01-17T11:59:19.712002360Z" level=info msg="NRI interface is disabled by configuration." Jan 17 11:59:19.712170 containerd[1439]: time="2025-01-17T11:59:19.712015720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 11:59:19.712759 containerd[1439]: time="2025-01-17T11:59:19.712693280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 11:59:19.713226 containerd[1439]: time="2025-01-17T11:59:19.713092920Z" level=info msg="Connect containerd service" Jan 17 11:59:19.713226 containerd[1439]: time="2025-01-17T11:59:19.713139920Z" level=info msg="using legacy CRI server" Jan 17 11:59:19.713226 containerd[1439]: time="2025-01-17T11:59:19.713148480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 11:59:19.713830 containerd[1439]: time="2025-01-17T11:59:19.713439840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 11:59:19.714659 containerd[1439]: time="2025-01-17T11:59:19.714632400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 11:59:19.715121 containerd[1439]: time="2025-01-17T11:59:19.715064120Z" level=info msg="Start subscribing containerd event" Jan 17 11:59:19.715473 containerd[1439]: time="2025-01-17T11:59:19.715454480Z" level=info msg="Start recovering state" Jan 17 11:59:19.715682 containerd[1439]: time="2025-01-17T11:59:19.715657640Z" level=info msg="Start event monitor" Jan 17 11:59:19.716063 containerd[1439]: time="2025-01-17T11:59:19.716044520Z" level=info msg="Start snapshots syncer" Jan 17 11:59:19.716247 containerd[1439]: time="2025-01-17T11:59:19.716114840Z" level=info msg="Start cni network conf syncer for default" Jan 17 11:59:19.716247 containerd[1439]: time="2025-01-17T11:59:19.716128760Z" level=info msg="Start streaming server" Jan 17 11:59:19.716775 containerd[1439]: time="2025-01-17T11:59:19.716667880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 11:59:19.716775 containerd[1439]: time="2025-01-17T11:59:19.716745280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 11:59:19.717330 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 11:59:19.719353 containerd[1439]: time="2025-01-17T11:59:19.719305800Z" level=info msg="containerd successfully booted in 0.049869s" Jan 17 11:59:19.842793 tar[1436]: linux-arm64/LICENSE Jan 17 11:59:19.843055 tar[1436]: linux-arm64/README.md Jan 17 11:59:19.860045 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 11:59:21.081001 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 17 11:59:21.083927 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 11:59:21.086051 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 11:59:21.097232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 11:59:21.099626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:21.102063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 11:59:21.121156 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 11:59:21.121335 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
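The CRI plugin config dumped above maps directly onto containerd's /etc/containerd/config.toml. A minimal sketch of the corresponding TOML, using the real key names with the values taken from the dump (everything else left at defaults), would be:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is expected at this stage: /etc/cni/net.d is still empty, and the "cni network conf syncer" started just afterwards will pick up a conflist as soon as a CNI add-on drops one there.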
Jan 17 11:59:21.123290 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 11:59:21.125525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 11:59:21.315138 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 11:59:21.333774 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 11:59:21.344122 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 11:59:21.350087 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 11:59:21.350249 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 11:59:21.353430 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 11:59:21.364771 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 11:59:21.367709 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 11:59:21.370016 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 11:59:21.371571 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 11:59:21.576379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:21.577967 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 11:59:21.579099 systemd[1]: Startup finished in 569ms (kernel) + 4.628s (initrd) + 3.844s (userspace) = 9.042s. Jan 17 11:59:21.580327 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:59:22.060004 kubelet[1529]: E0117 11:59:22.059841 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:59:22.062558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:59:22.062710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:59:25.722318 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 11:59:25.723385 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656). Jan 17 11:59:25.779748 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:25.780821 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:25.791183 systemd-logind[1426]: New session 1 of user core. Jan 17 11:59:25.792877 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 11:59:25.808503 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 11:59:25.818828 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 11:59:25.821162 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 11:59:25.828182 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 11:59:25.924708 systemd[1548]: Queued start job for default target default.target. Jan 17 11:59:25.937755 systemd[1548]: Created slice app.slice - User Application Slice. Jan 17 11:59:25.937797 systemd[1548]: Reached target paths.target - Paths. 
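The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join. A minimal sketch of what eventually lands there (real KubeletConfiguration fields, values assumed for this node):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup=true in the containerd runc options
    staticPodPath: /etc/kubernetes/manifests

Until the file appears, systemd keeps restarting the unit, which is the kubelet restart loop visible further down in the log.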
Jan 17 11:59:25.937809 systemd[1548]: Reached target timers.target - Timers. Jan 17 11:59:25.948095 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 11:59:25.956799 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 11:59:25.956857 systemd[1548]: Reached target sockets.target - Sockets. Jan 17 11:59:25.956868 systemd[1548]: Reached target basic.target - Basic System. Jan 17 11:59:25.956995 systemd[1548]: Reached target default.target - Main User Target. Jan 17 11:59:25.957038 systemd[1548]: Startup finished in 123ms. Jan 17 11:59:25.957168 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 11:59:25.958398 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 11:59:26.021775 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:36668.service - OpenSSH per-connection server daemon (10.0.0.1:36668). Jan 17 11:59:26.062581 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 36668 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.064001 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.068309 systemd-logind[1426]: New session 2 of user core. Jan 17 11:59:26.082561 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 11:59:26.138097 sshd[1559]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:26.158265 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:36668.service: Deactivated successfully. Jan 17 11:59:26.159628 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 11:59:26.160745 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Jan 17 11:59:26.161841 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:36676.service - OpenSSH per-connection server daemon (10.0.0.1:36676). Jan 17 11:59:26.162559 systemd-logind[1426]: Removed session 2. Jan 17 11:59:26.198364 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 36676 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.199637 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.203022 systemd-logind[1426]: New session 3 of user core. Jan 17 11:59:26.211024 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 11:59:26.259728 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:26.269111 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:36676.service: Deactivated successfully. Jan 17 11:59:26.270427 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 11:59:26.271632 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Jan 17 11:59:26.272676 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:36692.service - OpenSSH per-connection server daemon (10.0.0.1:36692). Jan 17 11:59:26.273441 systemd-logind[1426]: Removed session 3. Jan 17 11:59:26.309497 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.310770 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.314198 systemd-logind[1426]: New session 4 of user core. Jan 17 11:59:26.326016 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 11:59:26.377076 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:26.392019 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:36692.service: Deactivated successfully. 
Jan 17 11:59:26.393643 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 11:59:26.394874 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Jan 17 11:59:26.396415 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:36708.service - OpenSSH per-connection server daemon (10.0.0.1:36708). Jan 17 11:59:26.397219 systemd-logind[1426]: Removed session 4. Jan 17 11:59:26.432151 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 36708 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.433288 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.436955 systemd-logind[1426]: New session 5 of user core. Jan 17 11:59:26.447078 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 11:59:26.506389 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 11:59:26.506691 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:26.520716 sudo[1583]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:26.522401 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:26.533153 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:36708.service: Deactivated successfully. Jan 17 11:59:26.534531 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 11:59:26.535687 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Jan 17 11:59:26.536824 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:36710.service - OpenSSH per-connection server daemon (10.0.0.1:36710). Jan 17 11:59:26.537610 systemd-logind[1426]: Removed session 5. Jan 17 11:59:26.573379 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 36710 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.574550 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.578085 systemd-logind[1426]: New session 6 of user core. Jan 17 11:59:26.592078 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 11:59:26.641707 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 11:59:26.641998 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:26.644733 sudo[1592]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:26.649031 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 11:59:26.649546 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:26.669114 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 11:59:26.670139 auditctl[1595]: No rules Jan 17 11:59:26.670863 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 11:59:26.672936 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 11:59:26.674351 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 11:59:26.695828 augenrules[1613]: No rules Jan 17 11:59:26.697970 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 11:59:26.699278 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:26.701308 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:26.711050 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:36710.service: Deactivated successfully. 
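The sudo commands above delete the audit rule drop-ins and then restart audit-rules; the subsequent "auditctl: No rules" and "augenrules: No rules" lines are that service reloading a now-empty rule set. Roughly what the unit does under the hood, assuming the stock augenrules wrapper (a sketch, not the exact unit contents):

    augenrules --load    # concatenate /etc/audit/rules.d/*.rules and load them into the kernel
    auditctl -l          # list loaded rules; prints "No rules" when the set is empty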
Jan 17 11:59:26.712425 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 11:59:26.715001 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Jan 17 11:59:26.716109 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:36724.service - OpenSSH per-connection server daemon (10.0.0.1:36724). Jan 17 11:59:26.716788 systemd-logind[1426]: Removed session 6. Jan 17 11:59:26.752666 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 36724 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:59:26.753797 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:59:26.757586 systemd-logind[1426]: New session 7 of user core. Jan 17 11:59:26.770016 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 11:59:26.820220 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 11:59:26.820834 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:59:27.124166 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 11:59:27.124260 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 11:59:27.382088 dockerd[1641]: time="2025-01-17T11:59:27.381965520Z" level=info msg="Starting up" Jan 17 11:59:27.518551 dockerd[1641]: time="2025-01-17T11:59:27.518504360Z" level=info msg="Loading containers: start." Jan 17 11:59:27.596164 kernel: Initializing XFRM netlink socket Jan 17 11:59:27.647517 systemd-networkd[1390]: docker0: Link UP Jan 17 11:59:27.665957 dockerd[1641]: time="2025-01-17T11:59:27.665925640Z" level=info msg="Loading containers: done." Jan 17 11:59:27.678298 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2713910053-merged.mount: Deactivated successfully. Jan 17 11:59:27.679857 dockerd[1641]: time="2025-01-17T11:59:27.679729480Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 11:59:27.679857 dockerd[1641]: time="2025-01-17T11:59:27.679826080Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 11:59:27.679995 dockerd[1641]: time="2025-01-17T11:59:27.679979880Z" level=info msg="Daemon has completed initialization" Jan 17 11:59:27.707933 dockerd[1641]: time="2025-01-17T11:59:27.707778200Z" level=info msg="API listen on /run/docker.sock" Jan 17 11:59:27.708035 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 11:59:28.391929 containerd[1439]: time="2025-01-17T11:59:28.391866720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 11:59:29.017364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123334829.mount: Deactivated successfully. 
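Docker is now serving its API on /run/docker.sock (version 26.1.0, overlay2 storage driver). For illustration, a quick smoke test against that socket:

    curl --unix-socket /run/docker.sock http://localhost/version

The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, the daemon falls back to the slower naive diff driver, which mainly affects image-build performance.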
Jan 17 11:59:30.017184 containerd[1439]: time="2025-01-17T11:59:30.017126800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:30.017598 containerd[1439]: time="2025-01-17T11:59:30.017559360Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202459" Jan 17 11:59:30.018487 containerd[1439]: time="2025-01-17T11:59:30.018449240Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:30.024368 containerd[1439]: time="2025-01-17T11:59:30.023687680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:30.024758 containerd[1439]: time="2025-01-17T11:59:30.024719640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 1.63278976s" Jan 17 11:59:30.024795 containerd[1439]: time="2025-01-17T11:59:30.024759600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 11:59:30.043403 containerd[1439]: time="2025-01-17T11:59:30.043346080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 11:59:31.416059 containerd[1439]: time="2025-01-17T11:59:31.415864200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:31.416901 containerd[1439]: time="2025-01-17T11:59:31.416619320Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381104" Jan 17 11:59:31.417726 containerd[1439]: time="2025-01-17T11:59:31.417660480Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:31.422711 containerd[1439]: time="2025-01-17T11:59:31.421061520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:31.422711 containerd[1439]: time="2025-01-17T11:59:31.422179920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.37879104s" Jan 17 11:59:31.422711 containerd[1439]: time="2025-01-17T11:59:31.422208360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 
11:59:31.442223 containerd[1439]: time="2025-01-17T11:59:31.442188400Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 11:59:32.273354 containerd[1439]: time="2025-01-17T11:59:32.273301120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:32.273812 containerd[1439]: time="2025-01-17T11:59:32.273770920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765674" Jan 17 11:59:32.274692 containerd[1439]: time="2025-01-17T11:59:32.274661720Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:32.277558 containerd[1439]: time="2025-01-17T11:59:32.277518760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:32.278983 containerd[1439]: time="2025-01-17T11:59:32.278938560Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 836.71064ms" Jan 17 11:59:32.278983 containerd[1439]: time="2025-01-17T11:59:32.278978560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 11:59:32.297211 containerd[1439]: time="2025-01-17T11:59:32.297179600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 11:59:32.313132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 11:59:32.322066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:32.407504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:32.411212 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:59:32.509369 kubelet[1883]: E0117 11:59:32.509274 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:59:32.512880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:59:32.513062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:59:33.281447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456455227.mount: Deactivated successfully. 
Jan 17 11:59:33.718995 containerd[1439]: time="2025-01-17T11:59:33.718827280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:33.719912 containerd[1439]: time="2025-01-17T11:59:33.719760640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274684" Jan 17 11:59:33.720735 containerd[1439]: time="2025-01-17T11:59:33.720665160Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:33.722563 containerd[1439]: time="2025-01-17T11:59:33.722519880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:33.723216 containerd[1439]: time="2025-01-17T11:59:33.723128800Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 1.42591128s" Jan 17 11:59:33.723216 containerd[1439]: time="2025-01-17T11:59:33.723161400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 11:59:33.740918 containerd[1439]: time="2025-01-17T11:59:33.740868760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 11:59:34.292966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295402397.mount: Deactivated successfully. 
Jan 17 11:59:34.833131 containerd[1439]: time="2025-01-17T11:59:34.832977520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:34.834032 containerd[1439]: time="2025-01-17T11:59:34.833804880Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 11:59:34.834835 containerd[1439]: time="2025-01-17T11:59:34.834769440Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:34.837725 containerd[1439]: time="2025-01-17T11:59:34.837695400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:34.839086 containerd[1439]: time="2025-01-17T11:59:34.839022760Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.09810024s" Jan 17 11:59:34.839086 containerd[1439]: time="2025-01-17T11:59:34.839058360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 11:59:34.857849 containerd[1439]: time="2025-01-17T11:59:34.857819360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 11:59:35.274497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284425644.mount: Deactivated successfully. 
Jan 17 11:59:35.278902 containerd[1439]: time="2025-01-17T11:59:35.278849000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:35.279321 containerd[1439]: time="2025-01-17T11:59:35.279282760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 17 11:59:35.280333 containerd[1439]: time="2025-01-17T11:59:35.280295480Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:35.283092 containerd[1439]: time="2025-01-17T11:59:35.283055240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:35.283773 containerd[1439]: time="2025-01-17T11:59:35.283629240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 425.7762ms" Jan 17 11:59:35.283773 containerd[1439]: time="2025-01-17T11:59:35.283662800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 11:59:35.301306 containerd[1439]: time="2025-01-17T11:59:35.301278680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 11:59:35.824594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986587762.mount: Deactivated successfully. Jan 17 11:59:37.078925 containerd[1439]: time="2025-01-17T11:59:37.078856000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:37.079347 containerd[1439]: time="2025-01-17T11:59:37.079317920Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 17 11:59:37.080338 containerd[1439]: time="2025-01-17T11:59:37.080288840Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:37.083479 containerd[1439]: time="2025-01-17T11:59:37.083427360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:59:37.084972 containerd[1439]: time="2025-01-17T11:59:37.084932800Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.78362204s" Jan 17 11:59:37.085022 containerd[1439]: time="2025-01-17T11:59:37.084972360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 11:59:42.501811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
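The sequence of pulls above (kube-apiserver/controller-manager/scheduler/proxy v1.29.13, coredns v1.11.1, pause 3.9, etcd 3.5.10-0) is the control-plane image set kubeadm pre-pulls for a 1.29 cluster; the equivalent manual step would be something like:

    kubeadm config images pull --kubernetes-version v1.29.13

Note the pause version skew: kubeadm pulls pause:3.9, while the containerd CRI config dumped earlier still names registry.k8s.io/pause:3.8 as sandbox_image, which is why pause:3.8 is fetched again later when the pod sandboxes are actually created.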
Jan 17 11:59:42.511138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:42.526816 systemd[1]: Reloading requested from client PID 2093 ('systemctl') (unit session-7.scope)... Jan 17 11:59:42.526836 systemd[1]: Reloading... Jan 17 11:59:42.594926 zram_generator::config[2132]: No configuration found. Jan 17 11:59:42.746922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:42.798848 systemd[1]: Reloading finished in 271 ms. Jan 17 11:59:42.839054 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 11:59:42.839121 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 11:59:42.839388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:42.841584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:42.927072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:42.931951 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:59:42.971520 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:42.971520 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:59:42.971520 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:42.971837 kubelet[2178]: I0117 11:59:42.971566 2178 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:59:44.499298 kubelet[2178]: I0117 11:59:44.499256 2178 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 11:59:44.499298 kubelet[2178]: I0117 11:59:44.499292 2178 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:59:44.499635 kubelet[2178]: I0117 11:59:44.499516 2178 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 11:59:44.521724 kubelet[2178]: I0117 11:59:44.521587 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:59:44.522074 kubelet[2178]: E0117 11:59:44.522054 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.530569 kubelet[2178]: I0117 11:59:44.530547 2178 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 11:59:44.531488 kubelet[2178]: I0117 11:59:44.531452 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:59:44.531656 kubelet[2178]: I0117 11:59:44.531632 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 11:59:44.531656 kubelet[2178]: I0117 11:59:44.531654 2178 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:59:44.531756 kubelet[2178]: I0117 11:59:44.531663 2178 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 11:59:44.532713 kubelet[2178]: I0117 11:59:44.532690 2178 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:44.534762 kubelet[2178]: I0117 11:59:44.534723 2178 kubelet.go:396] "Attempting to sync node with API server" Jan 17 11:59:44.534762 kubelet[2178]: I0117 11:59:44.534746 2178 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:59:44.535737 kubelet[2178]: I0117 11:59:44.535093 2178 kubelet.go:312] "Adding apiserver pod source" Jan 17 11:59:44.535737 kubelet[2178]: I0117 11:59:44.535115 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:59:44.535737 kubelet[2178]: W0117 11:59:44.535163 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.535737 kubelet[2178]: E0117 11:59:44.535220 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.535737 kubelet[2178]: W0117 11:59:44.535670 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 
11:59:44.535737 kubelet[2178]: E0117 11:59:44.535702 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.536771 kubelet[2178]: I0117 11:59:44.536624 2178 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:59:44.537155 kubelet[2178]: I0117 11:59:44.537130 2178 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:59:44.537801 kubelet[2178]: W0117 11:59:44.537769 2178 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 11:59:44.538922 kubelet[2178]: I0117 11:59:44.538608 2178 server.go:1256] "Started kubelet" Jan 17 11:59:44.538922 kubelet[2178]: I0117 11:59:44.538709 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:59:44.539877 kubelet[2178]: I0117 11:59:44.539465 2178 server.go:461] "Adding debug handlers to kubelet server" Jan 17 11:59:44.544470 kubelet[2178]: I0117 11:59:44.544367 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:59:44.544933 kubelet[2178]: I0117 11:59:44.544610 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 11:59:44.546644 kubelet[2178]: I0117 11:59:44.546621 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:59:44.547305 kubelet[2178]: E0117 11:59:44.547286 2178 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b790e845c1130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 11:59:44.53858744 +0000 UTC m=+1.603288081,LastTimestamp:2025-01-17 11:59:44.53858744 +0000 UTC m=+1.603288081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 11:59:44.548138 kubelet[2178]: I0117 11:59:44.547336 2178 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 11:59:44.548138 kubelet[2178]: I0117 11:59:44.547362 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 11:59:44.548138 kubelet[2178]: I0117 11:59:44.547708 2178 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 11:59:44.548138 kubelet[2178]: W0117 11:59:44.547760 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.548138 kubelet[2178]: E0117 11:59:44.547804 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.548138 kubelet[2178]: E0117 11:59:44.547973 2178 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:59:44.548138 kubelet[2178]: E0117 11:59:44.547980 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Jan 17 11:59:44.548602 kubelet[2178]: I0117 11:59:44.548583 2178 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:59:44.548702 kubelet[2178]: I0117 11:59:44.548680 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:59:44.549740 kubelet[2178]: I0117 11:59:44.549715 2178 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:59:44.560100 kubelet[2178]: I0117 11:59:44.560061 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:59:44.561418 kubelet[2178]: I0117 11:59:44.561391 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 11:59:44.561418 kubelet[2178]: I0117 11:59:44.561409 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:59:44.561493 kubelet[2178]: I0117 11:59:44.561426 2178 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 11:59:44.561493 kubelet[2178]: E0117 11:59:44.561470 2178 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:59:44.564527 kubelet[2178]: W0117 11:59:44.564487 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.564527 kubelet[2178]: E0117 11:59:44.564524 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:44.565261 kubelet[2178]: I0117 11:59:44.564970 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:59:44.565261 kubelet[2178]: I0117 11:59:44.564984 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:59:44.565261 kubelet[2178]: I0117 11:59:44.565001 2178 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:44.622837 kubelet[2178]: I0117 11:59:44.622791 2178 policy_none.go:49] "None policy: Start" Jan 17 11:59:44.623649 kubelet[2178]: I0117 11:59:44.623609 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:59:44.623709 kubelet[2178]: I0117 11:59:44.623659 2178 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:59:44.629818 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 11:59:44.646322 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
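Every reflector, lease, and event call above fails with "connect: connection refused" against https://10.0.0.33:6443 because the kubelet itself has to start kube-apiserver as a static pod before the API exists; this is the normal bootstrap chicken-and-egg, and the errors stop once the control-plane pods come up. While the API is down, the node can still be inspected directly over the CRI, e.g.:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods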
Jan 17 11:59:44.648730 kubelet[2178]: I0117 11:59:44.648700 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:59:44.649319 kubelet[2178]: E0117 11:59:44.649194 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 17 11:59:44.649724 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 11:59:44.661362 kubelet[2178]: I0117 11:59:44.660802 2178 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:59:44.661362 kubelet[2178]: I0117 11:59:44.661097 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:59:44.661719 kubelet[2178]: I0117 11:59:44.661544 2178 topology_manager.go:215] "Topology Admit Handler" podUID="5531624bd53b2461316d3470f6536a34" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 11:59:44.662561 kubelet[2178]: I0117 11:59:44.662496 2178 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 11:59:44.663784 kubelet[2178]: E0117 11:59:44.663539 2178 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 11:59:44.663784 kubelet[2178]: I0117 11:59:44.663668 2178 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 11:59:44.669260 systemd[1]: Created slice kubepods-burstable-pod5531624bd53b2461316d3470f6536a34.slice - libcontainer container kubepods-burstable-pod5531624bd53b2461316d3470f6536a34.slice. Jan 17 11:59:44.686277 systemd[1]: Created slice kubepods-burstable-poddd466de870bdf0e573d7965dbd759acf.slice - libcontainer container kubepods-burstable-poddd466de870bdf0e573d7965dbd759acf.slice. Jan 17 11:59:44.698958 systemd[1]: Created slice kubepods-burstable-pod605dd245551545e29d4e79fb03fd341e.slice - libcontainer container kubepods-burstable-pod605dd245551545e29d4e79fb03fd341e.slice. 
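The three "Topology Admit Handler" lines are the kubelet admitting the static pods it found under /etc/kubernetes/manifests; the pod UIDs (5531…, dd46…, 605d…) are derived from hashes of the manifest files, which is also where the kubepods-burstable-pod<hash>.slice cgroup names come from. A heavily abbreviated sketch of such a manifest (real kubeadm manifests carry many more flags and mounts):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.13
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.33   # assumed; the node address seen elsewhere in this log
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate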
Jan 17 11:59:44.748904 kubelet[2178]: E0117 11:59:44.748850 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Jan 17 11:59:44.850056 kubelet[2178]: I0117 11:59:44.849358 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:44.850056 kubelet[2178]: I0117 11:59:44.849413 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:44.850056 kubelet[2178]: I0117 11:59:44.849438 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:44.850056 kubelet[2178]: I0117 11:59:44.849459 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 11:59:44.850056 kubelet[2178]: I0117 11:59:44.849482 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:44.850239 kubelet[2178]: I0117 11:59:44.849523 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:44.850239 kubelet[2178]: I0117 11:59:44.849556 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:44.850239 kubelet[2178]: I0117 11:59:44.849575 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:44.850239 kubelet[2178]: 
I0117 11:59:44.849596 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:44.850772 kubelet[2178]: I0117 11:59:44.850727 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:59:44.851790 kubelet[2178]: E0117 11:59:44.851752 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 17 11:59:44.984772 kubelet[2178]: E0117 11:59:44.984735 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:44.985453 containerd[1439]: time="2025-01-17T11:59:44.985345800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5531624bd53b2461316d3470f6536a34,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:44.988631 kubelet[2178]: E0117 11:59:44.988604 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:44.988969 containerd[1439]: time="2025-01-17T11:59:44.988939440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:45.001451 kubelet[2178]: E0117 11:59:45.001429 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:45.001759 containerd[1439]: time="2025-01-17T11:59:45.001727960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 11:59:45.150028 kubelet[2178]: E0117 11:59:45.149924 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Jan 17 11:59:45.256068 kubelet[2178]: I0117 11:59:45.256038 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:59:45.256527 kubelet[2178]: E0117 11:59:45.256499 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 17 11:59:45.367455 kubelet[2178]: W0117 11:59:45.367367 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.367455 kubelet[2178]: E0117 11:59:45.367431 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: 
connection refused Jan 17 11:59:45.438552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188131842.mount: Deactivated successfully. Jan 17 11:59:45.443759 containerd[1439]: time="2025-01-17T11:59:45.443717720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:45.444861 containerd[1439]: time="2025-01-17T11:59:45.444809440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:45.445652 containerd[1439]: time="2025-01-17T11:59:45.445625720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:45.445907 containerd[1439]: time="2025-01-17T11:59:45.445852680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:59:45.446521 containerd[1439]: time="2025-01-17T11:59:45.446310320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 11:59:45.446927 containerd[1439]: time="2025-01-17T11:59:45.446903720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:59:45.447670 containerd[1439]: time="2025-01-17T11:59:45.447614360Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:45.451837 containerd[1439]: time="2025-01-17T11:59:45.451785720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:59:45.452772 containerd[1439]: time="2025-01-17T11:59:45.452742560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 463.7434ms" Jan 17 11:59:45.453485 containerd[1439]: time="2025-01-17T11:59:45.453462400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.03588ms" Jan 17 11:59:45.455997 containerd[1439]: time="2025-01-17T11:59:45.455942800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 454.1594ms" Jan 17 11:59:45.565051 kubelet[2178]: W0117 11:59:45.564992 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.565051 kubelet[2178]: E0117 11:59:45.565051 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.588142 containerd[1439]: time="2025-01-17T11:59:45.587880880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:45.588142 containerd[1439]: time="2025-01-17T11:59:45.588061080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:45.588142 containerd[1439]: time="2025-01-17T11:59:45.588072360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.588282 containerd[1439]: time="2025-01-17T11:59:45.588186840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.588444 containerd[1439]: time="2025-01-17T11:59:45.588367760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:45.588444 containerd[1439]: time="2025-01-17T11:59:45.588418920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:45.588531 containerd[1439]: time="2025-01-17T11:59:45.588435520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.588531 containerd[1439]: time="2025-01-17T11:59:45.588506600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.588972 containerd[1439]: time="2025-01-17T11:59:45.588656240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:59:45.589097 containerd[1439]: time="2025-01-17T11:59:45.589056160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:59:45.589634 containerd[1439]: time="2025-01-17T11:59:45.589596600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.589783 containerd[1439]: time="2025-01-17T11:59:45.589753480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:59:45.611054 systemd[1]: Started cri-containerd-08e99154cf59454a7ff1390fd4ebb5a935b63685f0b198bd875ccc49e9bb8401.scope - libcontainer container 08e99154cf59454a7ff1390fd4ebb5a935b63685f0b198bd875ccc49e9bb8401. Jan 17 11:59:45.612085 systemd[1]: Started cri-containerd-54a7ce7d937c3804bfa4cf8f665eedbd59b963b5d957f95ae0d64c7e41c49a90.scope - libcontainer container 54a7ce7d937c3804bfa4cf8f665eedbd59b963b5d957f95ae0d64c7e41c49a90. 
Jan 17 11:59:45.613055 systemd[1]: Started cri-containerd-7b71a1e6eaabba7d0ac4b4bd9e6600e4b830d81a3628892ba53944e238657f66.scope - libcontainer container 7b71a1e6eaabba7d0ac4b4bd9e6600e4b830d81a3628892ba53944e238657f66. Jan 17 11:59:45.648624 containerd[1439]: time="2025-01-17T11:59:45.648372520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e99154cf59454a7ff1390fd4ebb5a935b63685f0b198bd875ccc49e9bb8401\"" Jan 17 11:59:45.648624 containerd[1439]: time="2025-01-17T11:59:45.648485080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5531624bd53b2461316d3470f6536a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"54a7ce7d937c3804bfa4cf8f665eedbd59b963b5d957f95ae0d64c7e41c49a90\"" Jan 17 11:59:45.649649 kubelet[2178]: E0117 11:59:45.649571 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:45.649649 kubelet[2178]: E0117 11:59:45.649603 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:45.651294 containerd[1439]: time="2025-01-17T11:59:45.651264960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b71a1e6eaabba7d0ac4b4bd9e6600e4b830d81a3628892ba53944e238657f66\"" Jan 17 11:59:45.652024 kubelet[2178]: E0117 11:59:45.652001 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:45.652900 containerd[1439]: time="2025-01-17T11:59:45.652156480Z" level=info msg="CreateContainer within sandbox \"08e99154cf59454a7ff1390fd4ebb5a935b63685f0b198bd875ccc49e9bb8401\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 11:59:45.652900 containerd[1439]: time="2025-01-17T11:59:45.652311440Z" level=info msg="CreateContainer within sandbox \"54a7ce7d937c3804bfa4cf8f665eedbd59b963b5d957f95ae0d64c7e41c49a90\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 11:59:45.654027 containerd[1439]: time="2025-01-17T11:59:45.653983240Z" level=info msg="CreateContainer within sandbox \"7b71a1e6eaabba7d0ac4b4bd9e6600e4b830d81a3628892ba53944e238657f66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 11:59:45.669219 containerd[1439]: time="2025-01-17T11:59:45.669176840Z" level=info msg="CreateContainer within sandbox \"54a7ce7d937c3804bfa4cf8f665eedbd59b963b5d957f95ae0d64c7e41c49a90\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"072dcd0b69589105cf77bc15910927da791233fbacf9ce5f129569f5369437c8\"" Jan 17 11:59:45.669863 containerd[1439]: time="2025-01-17T11:59:45.669835920Z" level=info msg="StartContainer for \"072dcd0b69589105cf77bc15910927da791233fbacf9ce5f129569f5369437c8\"" Jan 17 11:59:45.672110 containerd[1439]: time="2025-01-17T11:59:45.672050120Z" level=info msg="CreateContainer within sandbox \"08e99154cf59454a7ff1390fd4ebb5a935b63685f0b198bd875ccc49e9bb8401\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"0e9ce082ce486d989a3402d2b70794f8f6c2ea5d84a7a395f88b67c59c9d85e9\"" Jan 17 11:59:45.672485 containerd[1439]: time="2025-01-17T11:59:45.672455200Z" level=info msg="StartContainer for \"0e9ce082ce486d989a3402d2b70794f8f6c2ea5d84a7a395f88b67c59c9d85e9\"" Jan 17 11:59:45.674251 containerd[1439]: time="2025-01-17T11:59:45.674214400Z" level=info msg="CreateContainer within sandbox \"7b71a1e6eaabba7d0ac4b4bd9e6600e4b830d81a3628892ba53944e238657f66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5fde567bcb682a9972a2e4e123ca1bbc18e536b597168ee7bbb07528414951a1\"" Jan 17 11:59:45.674914 containerd[1439]: time="2025-01-17T11:59:45.674643040Z" level=info msg="StartContainer for \"5fde567bcb682a9972a2e4e123ca1bbc18e536b597168ee7bbb07528414951a1\"" Jan 17 11:59:45.705030 systemd[1]: Started cri-containerd-072dcd0b69589105cf77bc15910927da791233fbacf9ce5f129569f5369437c8.scope - libcontainer container 072dcd0b69589105cf77bc15910927da791233fbacf9ce5f129569f5369437c8. Jan 17 11:59:45.708868 systemd[1]: Started cri-containerd-0e9ce082ce486d989a3402d2b70794f8f6c2ea5d84a7a395f88b67c59c9d85e9.scope - libcontainer container 0e9ce082ce486d989a3402d2b70794f8f6c2ea5d84a7a395f88b67c59c9d85e9. Jan 17 11:59:45.709706 systemd[1]: Started cri-containerd-5fde567bcb682a9972a2e4e123ca1bbc18e536b597168ee7bbb07528414951a1.scope - libcontainer container 5fde567bcb682a9972a2e4e123ca1bbc18e536b597168ee7bbb07528414951a1. Jan 17 11:59:45.739006 containerd[1439]: time="2025-01-17T11:59:45.738939880Z" level=info msg="StartContainer for \"072dcd0b69589105cf77bc15910927da791233fbacf9ce5f129569f5369437c8\" returns successfully" Jan 17 11:59:45.748987 containerd[1439]: time="2025-01-17T11:59:45.748942920Z" level=info msg="StartContainer for \"0e9ce082ce486d989a3402d2b70794f8f6c2ea5d84a7a395f88b67c59c9d85e9\" returns successfully" Jan 17 11:59:45.767732 containerd[1439]: time="2025-01-17T11:59:45.767680480Z" level=info msg="StartContainer for \"5fde567bcb682a9972a2e4e123ca1bbc18e536b597168ee7bbb07528414951a1\" returns successfully" Jan 17 11:59:45.808083 kubelet[2178]: W0117 11:59:45.806563 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.808083 kubelet[2178]: E0117 11:59:45.806626 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.840220 kubelet[2178]: W0117 11:59:45.840107 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:45.840220 kubelet[2178]: E0117 11:59:45.840167 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 17 11:59:46.059449 kubelet[2178]: I0117 11:59:46.059063 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:59:46.571643 kubelet[2178]: E0117 
11:59:46.571578 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:46.577842 kubelet[2178]: E0117 11:59:46.577508 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:46.578967 kubelet[2178]: E0117 11:59:46.578839 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:47.176845 kubelet[2178]: E0117 11:59:47.176804 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 11:59:47.269961 kubelet[2178]: I0117 11:59:47.269915 2178 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 11:59:47.292813 kubelet[2178]: E0117 11:59:47.292779 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.393258 kubelet[2178]: E0117 11:59:47.393211 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.494220 kubelet[2178]: E0117 11:59:47.493958 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.581313 kubelet[2178]: E0117 11:59:47.581266 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:47.595085 kubelet[2178]: E0117 11:59:47.595053 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.695670 kubelet[2178]: E0117 11:59:47.695635 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.796603 kubelet[2178]: E0117 11:59:47.796314 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.896869 kubelet[2178]: E0117 11:59:47.896831 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:47.997534 kubelet[2178]: E0117 11:59:47.997434 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:48.099770 kubelet[2178]: E0117 11:59:48.098522 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:48.199143 kubelet[2178]: E0117 11:59:48.199097 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:48.300066 kubelet[2178]: E0117 11:59:48.299978 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:48.400629 kubelet[2178]: E0117 11:59:48.400500 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:48.501138 kubelet[2178]: E0117 11:59:48.501095 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:59:49.537675 kubelet[2178]: I0117 
11:59:49.537606 2178 apiserver.go:52] "Watching apiserver" Jan 17 11:59:49.547988 kubelet[2178]: I0117 11:59:49.547956 2178 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 11:59:50.069012 systemd[1]: Reloading requested from client PID 2451 ('systemctl') (unit session-7.scope)... Jan 17 11:59:50.069026 systemd[1]: Reloading... Jan 17 11:59:50.129916 zram_generator::config[2490]: No configuration found. Jan 17 11:59:50.250474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:59:50.313937 systemd[1]: Reloading finished in 244 ms. Jan 17 11:59:50.347361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:50.358827 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 11:59:50.359969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:50.360030 systemd[1]: kubelet.service: Consumed 1.937s CPU time, 112.2M memory peak, 0B memory swap peak. Jan 17 11:59:50.367165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:59:50.458601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:59:50.462560 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:59:50.510973 kubelet[2532]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:50.510973 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:59:50.510973 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:59:50.511940 kubelet[2532]: I0117 11:59:50.511791 2532 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:59:50.516666 kubelet[2532]: I0117 11:59:50.516635 2532 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 11:59:50.516666 kubelet[2532]: I0117 11:59:50.516661 2532 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:59:50.516865 kubelet[2532]: I0117 11:59:50.516842 2532 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 11:59:50.520402 kubelet[2532]: I0117 11:59:50.520368 2532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 11:59:50.524831 kubelet[2532]: I0117 11:59:50.524799 2532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:59:50.531174 kubelet[2532]: I0117 11:59:50.531146 2532 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 11:59:50.531380 kubelet[2532]: I0117 11:59:50.531366 2532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:59:50.531547 kubelet[2532]: I0117 11:59:50.531521 2532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 11:59:50.531547 kubelet[2532]: I0117 11:59:50.531544 2532 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:59:50.531640 kubelet[2532]: I0117 11:59:50.531552 2532 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 11:59:50.531640 kubelet[2532]: I0117 11:59:50.531580 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:50.531699 kubelet[2532]: I0117 11:59:50.531673 2532 kubelet.go:396] "Attempting to sync node with API server" Jan 17 11:59:50.531699 kubelet[2532]: I0117 11:59:50.531687 2532 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:59:50.531742 kubelet[2532]: I0117 11:59:50.531705 2532 kubelet.go:312] "Adding apiserver pod source" Jan 17 11:59:50.531742 kubelet[2532]: I0117 11:59:50.531716 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:59:50.535907 kubelet[2532]: I0117 11:59:50.532426 2532 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:59:50.536032 kubelet[2532]: I0117 11:59:50.536014 2532 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:59:50.536509 kubelet[2532]: I0117 11:59:50.536489 2532 server.go:1256] "Started kubelet" Jan 17 11:59:50.536617 kubelet[2532]: I0117 11:59:50.536606 2532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:59:50.537061 kubelet[2532]: I0117 11:59:50.537038 2532 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:59:50.537429 sudo[2547]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 11:59:50.537697 sudo[2547]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Jan 17 11:59:50.540712 kubelet[2532]: I0117 11:59:50.540695 2532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 11:59:50.543892 kubelet[2532]: I0117 11:59:50.537423 2532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:59:50.544461 kubelet[2532]: I0117 11:59:50.537970 2532 server.go:461] "Adding debug handlers to kubelet server" Jan 17 11:59:50.547233 kubelet[2532]: I0117 11:59:50.546812 2532 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 11:59:50.547233 kubelet[2532]: I0117 11:59:50.547168 2532 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 11:59:50.547585 kubelet[2532]: I0117 11:59:50.547562 2532 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 11:59:50.553301 kubelet[2532]: E0117 11:59:50.553281 2532 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:59:50.554043 kubelet[2532]: I0117 11:59:50.554017 2532 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:59:50.554408 kubelet[2532]: I0117 11:59:50.554239 2532 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:59:50.556052 kubelet[2532]: I0117 11:59:50.555978 2532 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:59:50.558807 kubelet[2532]: I0117 11:59:50.558787 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:59:50.560955 kubelet[2532]: I0117 11:59:50.560725 2532 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 11:59:50.560955 kubelet[2532]: I0117 11:59:50.560753 2532 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:59:50.560955 kubelet[2532]: I0117 11:59:50.560770 2532 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 11:59:50.560955 kubelet[2532]: E0117 11:59:50.560821 2532 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:59:50.591534 kubelet[2532]: I0117 11:59:50.591503 2532 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:59:50.591534 kubelet[2532]: I0117 11:59:50.591527 2532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:59:50.591534 kubelet[2532]: I0117 11:59:50.591547 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:59:50.591705 kubelet[2532]: I0117 11:59:50.591695 2532 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 11:59:50.591727 kubelet[2532]: I0117 11:59:50.591715 2532 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 11:59:50.591727 kubelet[2532]: I0117 11:59:50.591722 2532 policy_none.go:49] "None policy: Start" Jan 17 11:59:50.592606 kubelet[2532]: I0117 11:59:50.592585 2532 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:59:50.592678 kubelet[2532]: I0117 11:59:50.592618 2532 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:59:50.592797 kubelet[2532]: I0117 11:59:50.592779 2532 state_mem.go:75] "Updated machine memory state" Jan 17 11:59:50.597853 kubelet[2532]: I0117 11:59:50.597778 2532 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:59:50.598215 kubelet[2532]: I0117 11:59:50.598027 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:59:50.649117 kubelet[2532]: I0117 11:59:50.649016 2532 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:59:50.656512 kubelet[2532]: I0117 11:59:50.656228 2532 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 11:59:50.656512 kubelet[2532]: I0117 11:59:50.656314 2532 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 11:59:50.661867 kubelet[2532]: I0117 11:59:50.661807 2532 topology_manager.go:215] "Topology Admit Handler" podUID="5531624bd53b2461316d3470f6536a34" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 11:59:50.662119 kubelet[2532]: I0117 11:59:50.662103 2532 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 11:59:50.662972 kubelet[2532]: I0117 11:59:50.662289 2532 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 11:59:50.748666 kubelet[2532]: I0117 11:59:50.748633 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:50.748666 kubelet[2532]: I0117 11:59:50.748674 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:50.748828 kubelet[2532]: I0117 11:59:50.748700 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:50.748828 kubelet[2532]: I0117 11:59:50.748721 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 11:59:50.748828 kubelet[2532]: I0117 11:59:50.748740 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:50.748828 kubelet[2532]: I0117 11:59:50.748757 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:50.748828 kubelet[2532]: I0117 11:59:50.748792 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:50.748971 kubelet[2532]: I0117 11:59:50.748814 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:50.748971 kubelet[2532]: I0117 11:59:50.748834 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5531624bd53b2461316d3470f6536a34-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5531624bd53b2461316d3470f6536a34\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:50.971534 kubelet[2532]: E0117 11:59:50.971418 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:50.972385 kubelet[2532]: E0117 11:59:50.972251 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:50.972385 kubelet[2532]: E0117 11:59:50.972351 
2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:50.992850 sudo[2547]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:51.533416 kubelet[2532]: I0117 11:59:51.533021 2532 apiserver.go:52] "Watching apiserver" Jan 17 11:59:51.547777 kubelet[2532]: I0117 11:59:51.547391 2532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 11:59:51.580033 kubelet[2532]: E0117 11:59:51.579978 2532 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 11:59:51.580307 kubelet[2532]: E0117 11:59:51.580280 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:51.583908 kubelet[2532]: E0117 11:59:51.580603 2532 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 11:59:51.583908 kubelet[2532]: E0117 11:59:51.580858 2532 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 11:59:51.583908 kubelet[2532]: E0117 11:59:51.580994 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:51.583908 kubelet[2532]: E0117 11:59:51.581276 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:51.611174 kubelet[2532]: I0117 11:59:51.611132 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6110891999999999 podStartE2EDuration="1.6110892s" podCreationTimestamp="2025-01-17 11:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:51.60529408 +0000 UTC m=+1.139564801" watchObservedRunningTime="2025-01-17 11:59:51.6110892 +0000 UTC m=+1.145359921" Jan 17 11:59:51.623156 kubelet[2532]: I0117 11:59:51.623108 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6230708 podStartE2EDuration="1.6230708s" podCreationTimestamp="2025-01-17 11:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:51.62272788 +0000 UTC m=+1.156998561" watchObservedRunningTime="2025-01-17 11:59:51.6230708 +0000 UTC m=+1.157341521" Jan 17 11:59:51.639270 kubelet[2532]: I0117 11:59:51.639192 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.639154 podStartE2EDuration="1.639154s" podCreationTimestamp="2025-01-17 11:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:59:51.63069812 +0000 UTC m=+1.164968841" watchObservedRunningTime="2025-01-17 11:59:51.639154 +0000 
UTC m=+1.173424721" Jan 17 11:59:52.577765 kubelet[2532]: E0117 11:59:52.575615 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:52.577765 kubelet[2532]: E0117 11:59:52.576255 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:52.577765 kubelet[2532]: E0117 11:59:52.576689 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:53.634504 kubelet[2532]: E0117 11:59:53.634461 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:54.008488 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 17 11:59:54.014162 sshd[1621]: pam_unix(sshd:session): session closed for user core Jan 17 11:59:54.017999 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:36724.service: Deactivated successfully. Jan 17 11:59:54.019562 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 11:59:54.020849 systemd[1]: session-7.scope: Consumed 9.196s CPU time, 185.0M memory peak, 0B memory swap peak. Jan 17 11:59:54.021303 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Jan 17 11:59:54.022327 systemd-logind[1426]: Removed session 7. Jan 17 11:59:58.087065 kubelet[2532]: E0117 11:59:58.087033 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:58.587154 kubelet[2532]: E0117 11:59:58.587066 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:59.338604 kubelet[2532]: E0117 11:59:59.338575 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:59:59.588873 kubelet[2532]: E0117 11:59:59.588748 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:03.642743 kubelet[2532]: E0117 12:00:03.642499 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:04.482522 kubelet[2532]: I0117 12:00:04.482484 2532 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:00:04.482874 containerd[1439]: time="2025-01-17T12:00:04.482824924Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:00:04.483336 kubelet[2532]: I0117 12:00:04.483314 2532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:00:04.544021 update_engine[1431]: I20250117 12:00:04.543927 1431 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:00:04.589985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2621) Jan 17 12:00:04.620911 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2624) Jan 17 12:00:05.371022 kubelet[2532]: I0117 12:00:05.368925 2532 topology_manager.go:215] "Topology Admit Handler" podUID="be903a47-9bbc-4b12-b237-a33abe10510f" podNamespace="kube-system" podName="kube-proxy-878zj" Jan 17 12:00:05.379089 kubelet[2532]: I0117 12:00:05.376363 2532 topology_manager.go:215] "Topology Admit Handler" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" podNamespace="kube-system" podName="cilium-pgjcw" Jan 17 12:00:05.392529 systemd[1]: Created slice kubepods-besteffort-podbe903a47_9bbc_4b12_b237_a33abe10510f.slice - libcontainer container kubepods-besteffort-podbe903a47_9bbc_4b12_b237_a33abe10510f.slice. Jan 17 12:00:05.411273 systemd[1]: Created slice kubepods-burstable-podd42eb1f6_ca8e_4d80_8e73_fa5046babc27.slice - libcontainer container kubepods-burstable-podd42eb1f6_ca8e_4d80_8e73_fa5046babc27.slice. Jan 17 12:00:05.445856 kubelet[2532]: I0117 12:00:05.445818 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-bpf-maps\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.446055 kubelet[2532]: I0117 12:00:05.446042 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-kernel\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.446152 kubelet[2532]: I0117 12:00:05.446140 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-lib-modules\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.447015 kubelet[2532]: I0117 12:00:05.446989 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-xtables-lock\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448069 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-config-path\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448117 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hubble-tls\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448137 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hostproc\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448198 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be903a47-9bbc-4b12-b237-a33abe10510f-lib-modules\") pod \"kube-proxy-878zj\" (UID: \"be903a47-9bbc-4b12-b237-a33abe10510f\") " pod="kube-system/kube-proxy-878zj" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448232 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhc58\" (UniqueName: \"kubernetes.io/projected/be903a47-9bbc-4b12-b237-a33abe10510f-kube-api-access-zhc58\") pod \"kube-proxy-878zj\" (UID: \"be903a47-9bbc-4b12-b237-a33abe10510f\") " pod="kube-system/kube-proxy-878zj" Jan 17 12:00:05.448688 kubelet[2532]: I0117 12:00:05.448297 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-etc-cni-netd\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448336 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-clustermesh-secrets\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448356 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxt8\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448375 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be903a47-9bbc-4b12-b237-a33abe10510f-xtables-lock\") pod \"kube-proxy-878zj\" (UID: \"be903a47-9bbc-4b12-b237-a33abe10510f\") " pod="kube-system/kube-proxy-878zj" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448396 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be903a47-9bbc-4b12-b237-a33abe10510f-kube-proxy\") pod \"kube-proxy-878zj\" (UID: \"be903a47-9bbc-4b12-b237-a33abe10510f\") " pod="kube-system/kube-proxy-878zj" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448415 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-run\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.448998 kubelet[2532]: I0117 12:00:05.448458 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-cgroup\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") 
" pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.449135 kubelet[2532]: I0117 12:00:05.448513 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-net\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.449135 kubelet[2532]: I0117 12:00:05.448534 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cni-path\") pod \"cilium-pgjcw\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") " pod="kube-system/cilium-pgjcw" Jan 17 12:00:05.560863 kubelet[2532]: E0117 12:00:05.560817 2532 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:00:05.560863 kubelet[2532]: E0117 12:00:05.560860 2532 projected.go:200] Error preparing data for projected volume kube-api-access-zhc58 for pod kube-system/kube-proxy-878zj: configmap "kube-root-ca.crt" not found Jan 17 12:00:05.561026 kubelet[2532]: E0117 12:00:05.560931 2532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/be903a47-9bbc-4b12-b237-a33abe10510f-kube-api-access-zhc58 podName:be903a47-9bbc-4b12-b237-a33abe10510f nodeName:}" failed. No retries permitted until 2025-01-17 12:00:06.060911375 +0000 UTC m=+15.595182056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc58" (UniqueName: "kubernetes.io/projected/be903a47-9bbc-4b12-b237-a33abe10510f-kube-api-access-zhc58") pod "kube-proxy-878zj" (UID: "be903a47-9bbc-4b12-b237-a33abe10510f") : configmap "kube-root-ca.crt" not found Jan 17 12:00:05.562138 kubelet[2532]: E0117 12:00:05.561995 2532 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:00:05.562138 kubelet[2532]: E0117 12:00:05.562020 2532 projected.go:200] Error preparing data for projected volume kube-api-access-8bxt8 for pod kube-system/cilium-pgjcw: configmap "kube-root-ca.crt" not found Jan 17 12:00:05.562138 kubelet[2532]: E0117 12:00:05.562105 2532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8 podName:d42eb1f6-ca8e-4d80-8e73-fa5046babc27 nodeName:}" failed. No retries permitted until 2025-01-17 12:00:06.062092261 +0000 UTC m=+15.596362942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8bxt8" (UniqueName: "kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8") pod "cilium-pgjcw" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27") : configmap "kube-root-ca.crt" not found Jan 17 12:00:05.717203 kubelet[2532]: I0117 12:00:05.717094 2532 topology_manager.go:215] "Topology Admit Handler" podUID="93e65307-93ed-4bb9-9ac1-1ec2214c78ab" podNamespace="kube-system" podName="cilium-operator-5cc964979-mssq2" Jan 17 12:00:05.729287 systemd[1]: Created slice kubepods-besteffort-pod93e65307_93ed_4bb9_9ac1_1ec2214c78ab.slice - libcontainer container kubepods-besteffort-pod93e65307_93ed_4bb9_9ac1_1ec2214c78ab.slice. 
Jan 17 12:00:05.750917 kubelet[2532]: I0117 12:00:05.750813 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-cilium-config-path\") pod \"cilium-operator-5cc964979-mssq2\" (UID: \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\") " pod="kube-system/cilium-operator-5cc964979-mssq2" Jan 17 12:00:05.750917 kubelet[2532]: I0117 12:00:05.750857 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h46j\" (UniqueName: \"kubernetes.io/projected/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-kube-api-access-2h46j\") pod \"cilium-operator-5cc964979-mssq2\" (UID: \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\") " pod="kube-system/cilium-operator-5cc964979-mssq2" Jan 17 12:00:06.035412 kubelet[2532]: E0117 12:00:06.035306 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.041523 containerd[1439]: time="2025-01-17T12:00:06.041152316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mssq2,Uid:93e65307-93ed-4bb9-9ac1-1ec2214c78ab,Namespace:kube-system,Attempt:0,}" Jan 17 12:00:06.061575 containerd[1439]: time="2025-01-17T12:00:06.061485779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:06.061742 containerd[1439]: time="2025-01-17T12:00:06.061703460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:06.061963 containerd[1439]: time="2025-01-17T12:00:06.061783060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.061963 containerd[1439]: time="2025-01-17T12:00:06.061914421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.086050 systemd[1]: Started cri-containerd-159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da.scope - libcontainer container 159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da. 
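Each RunPodSandbox line is a CRI call from the kubelet into containerd over its unix socket. A sketch of issuing the same call directly with the cri-api client, reusing the metadata printed for the cilium-operator sandbox above; a real request also carries DNS, log-directory, and security settings that this log elides:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same CRI endpoint the kubelet is configured with on this host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the PodSandboxMetadata printed in the RunPodSandbox line above;
	// the UID is the pod UID the API server assigned to cilium-operator.
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-operator-5cc964979-mssq2",
				Uid:       "93e65307-93ed-4bb9-9ac1-1ec2214c78ab",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```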
Jan 17 12:00:06.115422 containerd[1439]: time="2025-01-17T12:00:06.115379412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mssq2,Uid:93e65307-93ed-4bb9-9ac1-1ec2214c78ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\"" Jan 17 12:00:06.121275 kubelet[2532]: E0117 12:00:06.121247 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.123153 containerd[1439]: time="2025-01-17T12:00:06.123094971Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:00:06.306169 kubelet[2532]: E0117 12:00:06.306045 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.306910 containerd[1439]: time="2025-01-17T12:00:06.306757341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-878zj,Uid:be903a47-9bbc-4b12-b237-a33abe10510f,Namespace:kube-system,Attempt:0,}" Jan 17 12:00:06.317047 kubelet[2532]: E0117 12:00:06.316615 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.317134 containerd[1439]: time="2025-01-17T12:00:06.317000993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgjcw,Uid:d42eb1f6-ca8e-4d80-8e73-fa5046babc27,Namespace:kube-system,Attempt:0,}" Jan 17 12:00:06.327406 containerd[1439]: time="2025-01-17T12:00:06.326960923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:06.327406 containerd[1439]: time="2025-01-17T12:00:06.327352645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:06.327406 containerd[1439]: time="2025-01-17T12:00:06.327381646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.329075 containerd[1439]: time="2025-01-17T12:00:06.329038814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.336517 containerd[1439]: time="2025-01-17T12:00:06.336440692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:06.336810 containerd[1439]: time="2025-01-17T12:00:06.336499172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:06.336810 containerd[1439]: time="2025-01-17T12:00:06.336510172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.336810 containerd[1439]: time="2025-01-17T12:00:06.336585692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:06.346109 systemd[1]: Started cri-containerd-f17d249113ec8fe75eaf96ef242adb8a5e38c09b9747c07c69adecb6bf8175a7.scope - libcontainer container f17d249113ec8fe75eaf96ef242adb8a5e38c09b9747c07c69adecb6bf8175a7. Jan 17 12:00:06.350011 systemd[1]: Started cri-containerd-3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689.scope - libcontainer container 3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689. Jan 17 12:00:06.373818 containerd[1439]: time="2025-01-17T12:00:06.373700240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-878zj,Uid:be903a47-9bbc-4b12-b237-a33abe10510f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f17d249113ec8fe75eaf96ef242adb8a5e38c09b9747c07c69adecb6bf8175a7\"" Jan 17 12:00:06.374489 kubelet[2532]: E0117 12:00:06.374468 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.378650 containerd[1439]: time="2025-01-17T12:00:06.378608385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgjcw,Uid:d42eb1f6-ca8e-4d80-8e73-fa5046babc27,Namespace:kube-system,Attempt:0,} returns sandbox id \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\"" Jan 17 12:00:06.379783 kubelet[2532]: E0117 12:00:06.379467 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:06.385009 containerd[1439]: time="2025-01-17T12:00:06.384973017Z" level=info msg="CreateContainer within sandbox \"f17d249113ec8fe75eaf96ef242adb8a5e38c09b9747c07c69adecb6bf8175a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:00:06.396837 containerd[1439]: time="2025-01-17T12:00:06.396789317Z" level=info msg="CreateContainer within sandbox \"f17d249113ec8fe75eaf96ef242adb8a5e38c09b9747c07c69adecb6bf8175a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"213568823e1d4c25fb6d21359195efa96869cc85c8d6080eb4f52cead11f3bcd\"" Jan 17 12:00:06.398732 containerd[1439]: time="2025-01-17T12:00:06.398703727Z" level=info msg="StartContainer for \"213568823e1d4c25fb6d21359195efa96869cc85c8d6080eb4f52cead11f3bcd\"" Jan 17 12:00:06.434075 systemd[1]: Started cri-containerd-213568823e1d4c25fb6d21359195efa96869cc85c8d6080eb4f52cead11f3bcd.scope - libcontainer container 213568823e1d4c25fb6d21359195efa96869cc85c8d6080eb4f52cead11f3bcd. Jan 17 12:00:06.468539 containerd[1439]: time="2025-01-17T12:00:06.468481360Z" level=info msg="StartContainer for \"213568823e1d4c25fb6d21359195efa96869cc85c8d6080eb4f52cead11f3bcd\" returns successfully" Jan 17 12:00:06.601073 kubelet[2532]: E0117 12:00:06.600851 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:13.467971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251878247.mount: Deactivated successfully. 
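The tmpmount unit above belongs to the operator-generic pull started at 12:00:06; containerd stages layers under /var/lib/containerd/tmpmounts while unpacking. The equivalent pull can be driven from containerd's Go client in the CRI namespace; pulling by tag@digest, as the kubelet does here, pins the content and leaves the tag informational:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the kubelet's CRI traffic uses on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Ref copied from the PullImage line above; the digest pins the content.
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}
```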
Jan 17 12:00:15.840954 containerd[1439]: time="2025-01-17T12:00:15.840897356Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:15.841810 containerd[1439]: time="2025-01-17T12:00:15.841769159Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138282" Jan 17 12:00:15.842560 containerd[1439]: time="2025-01-17T12:00:15.842534641Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:15.843806 containerd[1439]: time="2025-01-17T12:00:15.843773884Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 9.720641113s" Jan 17 12:00:15.843839 containerd[1439]: time="2025-01-17T12:00:15.843808725Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 12:00:15.849627 containerd[1439]: time="2025-01-17T12:00:15.849580541Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:00:15.855179 containerd[1439]: time="2025-01-17T12:00:15.855150917Z" level=info msg="CreateContainer within sandbox \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:00:15.867648 containerd[1439]: time="2025-01-17T12:00:15.867614232Z" level=info msg="CreateContainer within sandbox \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\"" Jan 17 12:00:15.868239 containerd[1439]: time="2025-01-17T12:00:15.868060473Z" level=info msg="StartContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\"" Jan 17 12:00:15.895116 systemd[1]: Started cri-containerd-fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11.scope - libcontainer container fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11. 
Jan 17 12:00:15.918314 containerd[1439]: time="2025-01-17T12:00:15.918207295Z" level=info msg="StartContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" returns successfully" Jan 17 12:00:16.622568 kubelet[2532]: E0117 12:00:16.622496 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:16.636413 kubelet[2532]: I0117 12:00:16.636371 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-878zj" podStartSLOduration=11.636335538 podStartE2EDuration="11.636335538s" podCreationTimestamp="2025-01-17 12:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:00:06.610067237 +0000 UTC m=+16.144337998" watchObservedRunningTime="2025-01-17 12:00:16.636335538 +0000 UTC m=+26.170606259" Jan 17 12:00:17.619939 kubelet[2532]: E0117 12:00:17.619765 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:22.094782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499626205.mount: Deactivated successfully. Jan 17 12:00:23.027438 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:41842.service - OpenSSH per-connection server daemon (10.0.0.1:41842). Jan 17 12:00:23.086802 sshd[2993]: Accepted publickey for core from 10.0.0.1 port 41842 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:23.087335 sshd[2993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:23.092695 systemd-logind[1426]: New session 8 of user core. Jan 17 12:00:23.100271 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:00:23.253492 sshd[2993]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:23.257830 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:41842.service: Deactivated successfully. Jan 17 12:00:23.260000 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:00:23.261068 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:00:23.262411 systemd-logind[1426]: Removed session 8. 
Jan 17 12:00:26.202270 containerd[1439]: time="2025-01-17T12:00:26.202156022Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:26.203676 containerd[1439]: time="2025-01-17T12:00:26.203491423Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651554" Jan 17 12:00:26.204404 containerd[1439]: time="2025-01-17T12:00:26.204362225Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:00:26.206030 containerd[1439]: time="2025-01-17T12:00:26.205993267Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.356379806s" Jan 17 12:00:26.206030 containerd[1439]: time="2025-01-17T12:00:26.206028507Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 12:00:26.208932 containerd[1439]: time="2025-01-17T12:00:26.208787711Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:00:26.242799 containerd[1439]: time="2025-01-17T12:00:26.241079676Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\"" Jan 17 12:00:26.242799 containerd[1439]: time="2025-01-17T12:00:26.242047677Z" level=info msg="StartContainer for \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\"" Jan 17 12:00:26.280138 systemd[1]: Started cri-containerd-85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd.scope - libcontainer container 85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd. Jan 17 12:00:26.381760 containerd[1439]: time="2025-01-17T12:00:26.381644352Z" level=info msg="StartContainer for \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\" returns successfully" Jan 17 12:00:26.382607 systemd[1]: cri-containerd-85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd.scope: Deactivated successfully. 
Jan 17 12:00:26.413089 containerd[1439]: time="2025-01-17T12:00:26.408012588Z" level=info msg="shim disconnected" id=85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd namespace=k8s.io Jan 17 12:00:26.413089 containerd[1439]: time="2025-01-17T12:00:26.413085995Z" level=warning msg="cleaning up after shim disconnected" id=85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd namespace=k8s.io Jan 17 12:00:26.413089 containerd[1439]: time="2025-01-17T12:00:26.413101275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:00:26.655848 kubelet[2532]: E0117 12:00:26.655791 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:26.661066 containerd[1439]: time="2025-01-17T12:00:26.660878221Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:00:26.674000 containerd[1439]: time="2025-01-17T12:00:26.673914759Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\"" Jan 17 12:00:26.674596 containerd[1439]: time="2025-01-17T12:00:26.674570240Z" level=info msg="StartContainer for \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\"" Jan 17 12:00:26.688662 kubelet[2532]: I0117 12:00:26.688159 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-mssq2" podStartSLOduration=11.964575334 podStartE2EDuration="21.688096819s" podCreationTimestamp="2025-01-17 12:00:05 +0000 UTC" firstStartedPulling="2025-01-17 12:00:06.121798684 +0000 UTC m=+15.656069405" lastFinishedPulling="2025-01-17 12:00:15.845320169 +0000 UTC m=+25.379590890" observedRunningTime="2025-01-17 12:00:16.640226428 +0000 UTC m=+26.174497149" watchObservedRunningTime="2025-01-17 12:00:26.688096819 +0000 UTC m=+36.222367540" Jan 17 12:00:26.701062 systemd[1]: Started cri-containerd-2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967.scope - libcontainer container 2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967. Jan 17 12:00:26.722237 containerd[1439]: time="2025-01-17T12:00:26.722173466Z" level=info msg="StartContainer for \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\" returns successfully" Jan 17 12:00:26.742494 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:00:26.742705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:26.742774 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:00:26.749191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:00:26.749365 systemd[1]: cri-containerd-2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967.scope: Deactivated successfully. 
Jan 17 12:00:26.767352 containerd[1439]: time="2025-01-17T12:00:26.767290009Z" level=info msg="shim disconnected" id=2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967 namespace=k8s.io Jan 17 12:00:26.767352 containerd[1439]: time="2025-01-17T12:00:26.767350729Z" level=warning msg="cleaning up after shim disconnected" id=2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967 namespace=k8s.io Jan 17 12:00:26.767352 containerd[1439]: time="2025-01-17T12:00:26.767359809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:00:26.790648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:27.235940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd-rootfs.mount: Deactivated successfully. Jan 17 12:00:27.660248 kubelet[2532]: E0117 12:00:27.660220 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:27.662820 containerd[1439]: time="2025-01-17T12:00:27.662785879Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:00:27.682642 containerd[1439]: time="2025-01-17T12:00:27.682334904Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\"" Jan 17 12:00:27.683483 containerd[1439]: time="2025-01-17T12:00:27.683106865Z" level=info msg="StartContainer for \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\"" Jan 17 12:00:27.714373 systemd[1]: Started cri-containerd-113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710.scope - libcontainer container 113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710. Jan 17 12:00:27.743486 containerd[1439]: time="2025-01-17T12:00:27.743379504Z" level=info msg="StartContainer for \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\" returns successfully" Jan 17 12:00:27.763753 systemd[1]: cri-containerd-113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710.scope: Deactivated successfully. Jan 17 12:00:27.785947 containerd[1439]: time="2025-01-17T12:00:27.785872680Z" level=info msg="shim disconnected" id=113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710 namespace=k8s.io Jan 17 12:00:27.785947 containerd[1439]: time="2025-01-17T12:00:27.785945400Z" level=warning msg="cleaning up after shim disconnected" id=113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710 namespace=k8s.io Jan 17 12:00:27.785947 containerd[1439]: time="2025-01-17T12:00:27.785955160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:00:28.235485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710-rootfs.mount: Deactivated successfully. Jan 17 12:00:28.264325 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:41846.service - OpenSSH per-connection server daemon (10.0.0.1:41846). 
Jan 17 12:00:28.306518 sshd[3195]: Accepted publickey for core from 10.0.0.1 port 41846 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:28.307745 sshd[3195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:28.311877 systemd-logind[1426]: New session 9 of user core. Jan 17 12:00:28.321126 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:00:28.448222 sshd[3195]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:28.454998 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:41846.service: Deactivated successfully. Jan 17 12:00:28.456857 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:00:28.460012 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:00:28.463212 systemd-logind[1426]: Removed session 9. Jan 17 12:00:28.664085 kubelet[2532]: E0117 12:00:28.664059 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:28.667228 containerd[1439]: time="2025-01-17T12:00:28.667186736Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:00:28.692438 containerd[1439]: time="2025-01-17T12:00:28.692369927Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\"" Jan 17 12:00:28.692916 containerd[1439]: time="2025-01-17T12:00:28.692878888Z" level=info msg="StartContainer for \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\"" Jan 17 12:00:28.728116 systemd[1]: Started cri-containerd-8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9.scope - libcontainer container 8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9. Jan 17 12:00:28.750128 systemd[1]: cri-containerd-8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9.scope: Deactivated successfully. Jan 17 12:00:28.752492 containerd[1439]: time="2025-01-17T12:00:28.752436521Z" level=info msg="StartContainer for \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\" returns successfully" Jan 17 12:00:28.779150 containerd[1439]: time="2025-01-17T12:00:28.779062473Z" level=info msg="shim disconnected" id=8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9 namespace=k8s.io Jan 17 12:00:28.779150 containerd[1439]: time="2025-01-17T12:00:28.779120833Z" level=warning msg="cleaning up after shim disconnected" id=8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9 namespace=k8s.io Jan 17 12:00:28.779150 containerd[1439]: time="2025-01-17T12:00:28.779131833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:00:29.235758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9-rootfs.mount: Deactivated successfully. 
Jan 17 12:00:29.668817 kubelet[2532]: E0117 12:00:29.668763 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:29.673182 containerd[1439]: time="2025-01-17T12:00:29.671550995Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:00:29.695022 containerd[1439]: time="2025-01-17T12:00:29.694980102Z" level=info msg="CreateContainer within sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\"" Jan 17 12:00:29.697907 containerd[1439]: time="2025-01-17T12:00:29.697264184Z" level=info msg="StartContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\"" Jan 17 12:00:29.728079 systemd[1]: Started cri-containerd-1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953.scope - libcontainer container 1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953. Jan 17 12:00:29.759543 containerd[1439]: time="2025-01-17T12:00:29.759487976Z" level=info msg="StartContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" returns successfully" Jan 17 12:00:29.871032 kubelet[2532]: I0117 12:00:29.870989 2532 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:00:29.896780 kubelet[2532]: I0117 12:00:29.896737 2532 topology_manager.go:215] "Topology Admit Handler" podUID="da11b486-91ca-48fa-8f8c-164c5df647f2" podNamespace="kube-system" podName="coredns-76f75df574-khzd7" Jan 17 12:00:29.897081 kubelet[2532]: I0117 12:00:29.896921 2532 topology_manager.go:215] "Topology Admit Handler" podUID="cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8" podNamespace="kube-system" podName="coredns-76f75df574-r2wlx" Jan 17 12:00:29.915098 systemd[1]: Created slice kubepods-burstable-podcb05d32a_c1b9_46a8_ad6b_f1e60e92fbb8.slice - libcontainer container kubepods-burstable-podcb05d32a_c1b9_46a8_ad6b_f1e60e92fbb8.slice. Jan 17 12:00:29.919548 systemd[1]: Created slice kubepods-burstable-podda11b486_91ca_48fa_8f8c_164c5df647f2.slice - libcontainer container kubepods-burstable-podda11b486_91ca_48fa_8f8c_164c5df647f2.slice. 
Jan 17 12:00:29.936877 kubelet[2532]: I0117 12:00:29.936852 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpj28\" (UniqueName: \"kubernetes.io/projected/da11b486-91ca-48fa-8f8c-164c5df647f2-kube-api-access-qpj28\") pod \"coredns-76f75df574-khzd7\" (UID: \"da11b486-91ca-48fa-8f8c-164c5df647f2\") " pod="kube-system/coredns-76f75df574-khzd7" Jan 17 12:00:29.936877 kubelet[2532]: I0117 12:00:29.936904 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57pb8\" (UniqueName: \"kubernetes.io/projected/cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8-kube-api-access-57pb8\") pod \"coredns-76f75df574-r2wlx\" (UID: \"cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8\") " pod="kube-system/coredns-76f75df574-r2wlx" Jan 17 12:00:29.937084 kubelet[2532]: I0117 12:00:29.936976 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8-config-volume\") pod \"coredns-76f75df574-r2wlx\" (UID: \"cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8\") " pod="kube-system/coredns-76f75df574-r2wlx" Jan 17 12:00:29.937084 kubelet[2532]: I0117 12:00:29.937012 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da11b486-91ca-48fa-8f8c-164c5df647f2-config-volume\") pod \"coredns-76f75df574-khzd7\" (UID: \"da11b486-91ca-48fa-8f8c-164c5df647f2\") " pod="kube-system/coredns-76f75df574-khzd7" Jan 17 12:00:30.219827 kubelet[2532]: E0117 12:00:30.219429 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:30.222372 kubelet[2532]: E0117 12:00:30.222276 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:30.222609 containerd[1439]: time="2025-01-17T12:00:30.222496131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r2wlx,Uid:cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8,Namespace:kube-system,Attempt:0,}" Jan 17 12:00:30.223302 containerd[1439]: time="2025-01-17T12:00:30.223228612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-khzd7,Uid:da11b486-91ca-48fa-8f8c-164c5df647f2,Namespace:kube-system,Attempt:0,}" Jan 17 12:00:30.684986 kubelet[2532]: E0117 12:00:30.684945 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:30.696370 kubelet[2532]: I0117 12:00:30.696340 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pgjcw" podStartSLOduration=5.873942106 podStartE2EDuration="25.696288081s" podCreationTimestamp="2025-01-17 12:00:05 +0000 UTC" firstStartedPulling="2025-01-17 12:00:06.384009972 +0000 UTC m=+15.918280693" lastFinishedPulling="2025-01-17 12:00:26.206355947 +0000 UTC m=+35.740626668" observedRunningTime="2025-01-17 12:00:30.691784716 +0000 UTC m=+40.226055437" watchObservedRunningTime="2025-01-17 12:00:30.696288081 +0000 UTC m=+40.230558842" Jan 17 12:00:31.675381 kubelet[2532]: E0117 12:00:31.674953 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:00:31.984086 systemd-networkd[1390]: cilium_host: Link UP Jan 17 12:00:31.984329 systemd-networkd[1390]: cilium_net: Link UP Jan 17 12:00:31.984332 systemd-networkd[1390]: cilium_net: Gained carrier Jan 17 12:00:31.984487 systemd-networkd[1390]: cilium_host: Gained carrier Jan 17 12:00:31.986121 systemd-networkd[1390]: cilium_net: Gained IPv6LL Jan 17 12:00:31.989226 systemd-networkd[1390]: cilium_host: Gained IPv6LL Jan 17 12:00:32.075195 systemd-networkd[1390]: cilium_vxlan: Link UP Jan 17 12:00:32.075205 systemd-networkd[1390]: cilium_vxlan: Gained carrier Jan 17 12:00:32.375913 kernel: NET: Registered PF_ALG protocol family Jan 17 12:00:32.676052 kubelet[2532]: E0117 12:00:32.675877 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:32.940599 systemd-networkd[1390]: lxc_health: Link UP Jan 17 12:00:32.942869 systemd-networkd[1390]: lxc_health: Gained carrier Jan 17 12:00:33.337606 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Jan 17 12:00:33.354751 systemd-networkd[1390]: lxc73209d475cb1: Link UP Jan 17 12:00:33.362918 kernel: eth0: renamed from tmpe4598 Jan 17 12:00:33.376841 systemd-networkd[1390]: lxc986a91e9090d: Link UP Jan 17 12:00:33.377248 systemd-networkd[1390]: lxc73209d475cb1: Gained carrier Jan 17 12:00:33.379263 kernel: eth0: renamed from tmp705b0 Jan 17 12:00:33.382547 systemd-networkd[1390]: lxc986a91e9090d: Gained carrier Jan 17 12:00:33.464539 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:51488.service - OpenSSH per-connection server daemon (10.0.0.1:51488). Jan 17 12:00:33.503444 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:33.504820 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:33.511155 systemd-logind[1426]: New session 10 of user core. Jan 17 12:00:33.517044 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:00:33.637917 sshd[3787]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:33.641803 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:51488.service: Deactivated successfully. Jan 17 12:00:33.643796 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:00:33.646570 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:00:33.647637 systemd-logind[1426]: Removed session 10. Jan 17 12:00:34.320128 kubelet[2532]: E0117 12:00:34.320075 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:34.679527 kubelet[2532]: E0117 12:00:34.679429 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:34.746054 systemd-networkd[1390]: lxc986a91e9090d: Gained IPv6LL Jan 17 12:00:34.873076 systemd-networkd[1390]: lxc73209d475cb1: Gained IPv6LL Jan 17 12:00:34.937092 systemd-networkd[1390]: lxc_health: Gained IPv6LL Jan 17 12:00:36.926486 containerd[1439]: time="2025-01-17T12:00:36.926396098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:00:36.927019 containerd[1439]: time="2025-01-17T12:00:36.926515378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:36.927019 containerd[1439]: time="2025-01-17T12:00:36.926548018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:36.927095 containerd[1439]: time="2025-01-17T12:00:36.927034658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:36.934914 containerd[1439]: time="2025-01-17T12:00:36.934273023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:00:36.934914 containerd[1439]: time="2025-01-17T12:00:36.934726464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:00:36.934914 containerd[1439]: time="2025-01-17T12:00:36.934738064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:36.934914 containerd[1439]: time="2025-01-17T12:00:36.934819424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:00:36.955069 systemd[1]: Started cri-containerd-705b0f628da5b99f8538cd3f3be736edeefc35ba6e6625bb889e6d553f432b90.scope - libcontainer container 705b0f628da5b99f8538cd3f3be736edeefc35ba6e6625bb889e6d553f432b90. Jan 17 12:00:36.956323 systemd[1]: Started cri-containerd-e4598fbde8de5c124b373596264d57956005c0d6402e088976848182d81aeef7.scope - libcontainer container e4598fbde8de5c124b373596264d57956005c0d6402e088976848182d81aeef7.
Jan 17 12:00:36.965691 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:36.972838 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:00:36.989075 containerd[1439]: time="2025-01-17T12:00:36.989034663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r2wlx,Uid:cb05d32a-c1b9-46a8-ad6b-f1e60e92fbb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"705b0f628da5b99f8538cd3f3be736edeefc35ba6e6625bb889e6d553f432b90\"" Jan 17 12:00:36.990845 kubelet[2532]: E0117 12:00:36.990002 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:36.992863 containerd[1439]: time="2025-01-17T12:00:36.992822346Z" level=info msg="CreateContainer within sandbox \"705b0f628da5b99f8538cd3f3be736edeefc35ba6e6625bb889e6d553f432b90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:00:36.998195 containerd[1439]: time="2025-01-17T12:00:36.998150950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-khzd7,Uid:da11b486-91ca-48fa-8f8c-164c5df647f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4598fbde8de5c124b373596264d57956005c0d6402e088976848182d81aeef7\"" Jan 17 12:00:36.999865 kubelet[2532]: E0117 12:00:36.999833 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:37.002413 containerd[1439]: time="2025-01-17T12:00:37.002298273Z" level=info msg="CreateContainer within sandbox \"e4598fbde8de5c124b373596264d57956005c0d6402e088976848182d81aeef7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:00:37.051712 containerd[1439]: time="2025-01-17T12:00:37.051630827Z" level=info msg="CreateContainer within sandbox \"705b0f628da5b99f8538cd3f3be736edeefc35ba6e6625bb889e6d553f432b90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c127462764bb696852c4d2f5d05f31a04b341d897e5ae055abaae438e9993247\"" Jan 17 12:00:37.052784 containerd[1439]: time="2025-01-17T12:00:37.052495667Z" level=info msg="StartContainer for \"c127462764bb696852c4d2f5d05f31a04b341d897e5ae055abaae438e9993247\"" Jan 17 12:00:37.052784 containerd[1439]: time="2025-01-17T12:00:37.052533347Z" level=info msg="CreateContainer within sandbox \"e4598fbde8de5c124b373596264d57956005c0d6402e088976848182d81aeef7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70f3f722592d421a81e0aeb3756bf830a525f7dc313fa1c2372ca5f020651771\"" Jan 17 12:00:37.053912 containerd[1439]: time="2025-01-17T12:00:37.053527588Z" level=info msg="StartContainer for \"70f3f722592d421a81e0aeb3756bf830a525f7dc313fa1c2372ca5f020651771\"" Jan 17 12:00:37.076097 systemd[1]: Started cri-containerd-c127462764bb696852c4d2f5d05f31a04b341d897e5ae055abaae438e9993247.scope - libcontainer container c127462764bb696852c4d2f5d05f31a04b341d897e5ae055abaae438e9993247. Jan 17 12:00:37.079920 systemd[1]: Started cri-containerd-70f3f722592d421a81e0aeb3756bf830a525f7dc313fa1c2372ca5f020651771.scope - libcontainer container 70f3f722592d421a81e0aeb3756bf830a525f7dc313fa1c2372ca5f020651771. 
Jan 17 12:00:37.102424 containerd[1439]: time="2025-01-17T12:00:37.102381501Z" level=info msg="StartContainer for \"c127462764bb696852c4d2f5d05f31a04b341d897e5ae055abaae438e9993247\" returns successfully" Jan 17 12:00:37.111503 containerd[1439]: time="2025-01-17T12:00:37.111396828Z" level=info msg="StartContainer for \"70f3f722592d421a81e0aeb3756bf830a525f7dc313fa1c2372ca5f020651771\" returns successfully" Jan 17 12:00:37.688001 kubelet[2532]: E0117 12:00:37.687712 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:37.690987 kubelet[2532]: E0117 12:00:37.690868 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:37.703438 kubelet[2532]: I0117 12:00:37.702547 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r2wlx" podStartSLOduration=32.702509993 podStartE2EDuration="32.702509993s" podCreationTimestamp="2025-01-17 12:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:00:37.702366832 +0000 UTC m=+47.236637553" watchObservedRunningTime="2025-01-17 12:00:37.702509993 +0000 UTC m=+47.236780714" Jan 17 12:00:37.931838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176539765.mount: Deactivated successfully. Jan 17 12:00:38.650701 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:51490.service - OpenSSH per-connection server daemon (10.0.0.1:51490). Jan 17 12:00:38.690106 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 51490 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:38.692501 kubelet[2532]: E0117 12:00:38.692207 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:38.692501 kubelet[2532]: E0117 12:00:38.692354 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:38.692357 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:38.696614 systemd-logind[1426]: New session 11 of user core. Jan 17 12:00:38.708073 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:00:38.816700 sshd[3990]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:38.820073 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:51490.service: Deactivated successfully. Jan 17 12:00:38.821830 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:00:38.822543 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:00:38.823695 systemd-logind[1426]: Removed session 11. Jan 17 12:00:39.694341 kubelet[2532]: E0117 12:00:39.694296 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:00:43.831705 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:43856.service - OpenSSH per-connection server daemon (10.0.0.1:43856). 
Jan 17 12:00:43.886794 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 43856 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:43.888454 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:43.892951 systemd-logind[1426]: New session 12 of user core. Jan 17 12:00:43.898060 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:00:44.010200 sshd[4006]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:44.016408 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:43856.service: Deactivated successfully. Jan 17 12:00:44.019383 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:00:44.020710 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:00:44.029278 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:43866.service - OpenSSH per-connection server daemon (10.0.0.1:43866). Jan 17 12:00:44.030192 systemd-logind[1426]: Removed session 12. Jan 17 12:00:44.066921 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 43866 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:44.068513 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:44.072416 systemd-logind[1426]: New session 13 of user core. Jan 17 12:00:44.083063 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:00:44.239301 sshd[4021]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:44.258201 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:43866.service: Deactivated successfully. Jan 17 12:00:44.260417 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:00:44.263726 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:00:44.274360 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). Jan 17 12:00:44.275922 systemd-logind[1426]: Removed session 13. Jan 17 12:00:44.310919 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:44.312607 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:44.317497 systemd-logind[1426]: New session 14 of user core. Jan 17 12:00:44.327105 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:00:44.439858 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:44.443730 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:43874.service: Deactivated successfully. Jan 17 12:00:44.445453 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:00:44.446108 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:00:44.446915 systemd-logind[1426]: Removed session 14. Jan 17 12:00:49.453594 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:43884.service - OpenSSH per-connection server daemon (10.0.0.1:43884). Jan 17 12:00:49.490698 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 43884 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:49.491940 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:49.495353 systemd-logind[1426]: New session 15 of user core. Jan 17 12:00:49.501035 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 17 12:00:49.611036 sshd[4048]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:49.614290 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:43884.service: Deactivated successfully. Jan 17 12:00:49.616750 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:00:49.617481 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:00:49.618270 systemd-logind[1426]: Removed session 15. Jan 17 12:00:54.621714 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638). Jan 17 12:00:54.658986 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:54.660192 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:54.663950 systemd-logind[1426]: New session 16 of user core. Jan 17 12:00:54.679051 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:00:54.785221 sshd[4064]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:54.792604 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:45638.service: Deactivated successfully. Jan 17 12:00:54.794208 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:00:54.795614 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:00:54.803154 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:45646.service - OpenSSH per-connection server daemon (10.0.0.1:45646). Jan 17 12:00:54.804059 systemd-logind[1426]: Removed session 16. Jan 17 12:00:54.836982 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 45646 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:54.838441 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:54.842824 systemd-logind[1426]: New session 17 of user core. Jan 17 12:00:54.853062 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:00:55.025694 sshd[4078]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:55.036421 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:45646.service: Deactivated successfully. Jan 17 12:00:55.038073 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:00:55.039285 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:00:55.040635 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:45658.service - OpenSSH per-connection server daemon (10.0.0.1:45658). Jan 17 12:00:55.041459 systemd-logind[1426]: Removed session 17. Jan 17 12:00:55.081930 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 45658 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:55.083442 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:55.087606 systemd-logind[1426]: New session 18 of user core. Jan 17 12:00:55.099029 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:00:56.356717 sshd[4090]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:56.367641 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:45658.service: Deactivated successfully. Jan 17 12:00:56.369263 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:00:56.371380 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:00:56.380275 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:45668.service - OpenSSH per-connection server daemon (10.0.0.1:45668). 
Jan 17 12:00:56.382294 systemd-logind[1426]: Removed session 18. Jan 17 12:00:56.414843 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 45668 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:56.416156 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:56.420074 systemd-logind[1426]: New session 19 of user core. Jan 17 12:00:56.430102 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:00:56.655199 sshd[4112]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:56.665325 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:45668.service: Deactivated successfully. Jan 17 12:00:56.667099 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:00:56.668794 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:00:56.670219 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:45682.service - OpenSSH per-connection server daemon (10.0.0.1:45682). Jan 17 12:00:56.673288 systemd-logind[1426]: Removed session 19. Jan 17 12:00:56.708085 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 45682 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:00:56.709379 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:56.713328 systemd-logind[1426]: New session 20 of user core. Jan 17 12:00:56.723140 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:00:56.829460 sshd[4124]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:56.833239 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:45682.service: Deactivated successfully. Jan 17 12:00:56.835519 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:00:56.836124 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:00:56.836947 systemd-logind[1426]: Removed session 20. Jan 17 12:01:01.562013 kubelet[2532]: E0117 12:01:01.561973 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:01.840495 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:45698.service - OpenSSH per-connection server daemon (10.0.0.1:45698). Jan 17 12:01:01.878757 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 45698 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:01.880209 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:01.884280 systemd-logind[1426]: New session 21 of user core. Jan 17 12:01:01.903087 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:01:02.008341 sshd[4141]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:02.011583 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:45698.service: Deactivated successfully. Jan 17 12:01:02.014067 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:01:02.014697 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:01:02.015632 systemd-logind[1426]: Removed session 21. Jan 17 12:01:03.562334 kubelet[2532]: E0117 12:01:03.562292 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:01:07.019208 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:57084.service - OpenSSH per-connection server daemon (10.0.0.1:57084). 
Jan 17 12:01:07.058603 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 57084 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:07.059873 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:07.063995 systemd-logind[1426]: New session 22 of user core. Jan 17 12:01:07.074049 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:01:07.196395 sshd[4159]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:07.199557 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:57084.service: Deactivated successfully. Jan 17 12:01:07.201195 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:01:07.202656 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:01:07.203463 systemd-logind[1426]: Removed session 22. Jan 17 12:01:12.212531 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:57100.service - OpenSSH per-connection server daemon (10.0.0.1:57100). Jan 17 12:01:12.249607 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 57100 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:12.250964 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:12.254918 systemd-logind[1426]: New session 23 of user core. Jan 17 12:01:12.267128 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:01:12.374305 sshd[4173]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:12.381418 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:57100.service: Deactivated successfully. Jan 17 12:01:12.383025 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:01:12.384423 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:01:12.391271 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:57110.service - OpenSSH per-connection server daemon (10.0.0.1:57110). Jan 17 12:01:12.393121 systemd-logind[1426]: Removed session 23. Jan 17 12:01:12.424967 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 57110 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:01:12.426319 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:12.429996 systemd-logind[1426]: New session 24 of user core. Jan 17 12:01:12.440096 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:01:14.503990 kubelet[2532]: I0117 12:01:14.503088 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-khzd7" podStartSLOduration=69.503048972 podStartE2EDuration="1m9.503048972s" podCreationTimestamp="2025-01-17 12:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:00:37.726310729 +0000 UTC m=+47.260581450" watchObservedRunningTime="2025-01-17 12:01:14.503048972 +0000 UTC m=+84.037319693" Jan 17 12:01:14.521125 containerd[1439]: time="2025-01-17T12:01:14.521060526Z" level=info msg="StopContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" with timeout 30 (s)" Jan 17 12:01:14.521579 containerd[1439]: time="2025-01-17T12:01:14.521555173Z" level=info msg="Stop container \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" with signal terminated" Jan 17 12:01:14.532473 systemd[1]: cri-containerd-fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11.scope: Deactivated successfully. 
Jan 17 12:01:14.544242 containerd[1439]: time="2025-01-17T12:01:14.544195238Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:01:14.550666 containerd[1439]: time="2025-01-17T12:01:14.550628255Z" level=info msg="StopContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" with timeout 2 (s)" Jan 17 12:01:14.551066 containerd[1439]: time="2025-01-17T12:01:14.551032662Z" level=info msg="Stop container \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" with signal terminated" Jan 17 12:01:14.558382 systemd-networkd[1390]: lxc_health: Link DOWN Jan 17 12:01:14.558388 systemd-networkd[1390]: lxc_health: Lost carrier Jan 17 12:01:14.570312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11-rootfs.mount: Deactivated successfully. Jan 17 12:01:14.577819 containerd[1439]: time="2025-01-17T12:01:14.577768308Z" level=info msg="shim disconnected" id=fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11 namespace=k8s.io Jan 17 12:01:14.577819 containerd[1439]: time="2025-01-17T12:01:14.577815589Z" level=warning msg="cleaning up after shim disconnected" id=fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11 namespace=k8s.io Jan 17 12:01:14.578100 containerd[1439]: time="2025-01-17T12:01:14.577829829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:01:14.579588 systemd[1]: cri-containerd-1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953.scope: Deactivated successfully. Jan 17 12:01:14.579826 systemd[1]: cri-containerd-1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953.scope: Consumed 6.424s CPU time. Jan 17 12:01:14.602134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953-rootfs.mount: Deactivated successfully. Jan 17 12:01:14.615485 containerd[1439]: time="2025-01-17T12:01:14.615424481Z" level=info msg="shim disconnected" id=1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953 namespace=k8s.io Jan 17 12:01:14.615485 containerd[1439]: time="2025-01-17T12:01:14.615476961Z" level=warning msg="cleaning up after shim disconnected" id=1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953 namespace=k8s.io Jan 17 12:01:14.615485 containerd[1439]: time="2025-01-17T12:01:14.615486602Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:01:14.625652 containerd[1439]: time="2025-01-17T12:01:14.625597035Z" level=info msg="StopContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" returns successfully" Jan 17 12:01:14.626547 containerd[1439]: time="2025-01-17T12:01:14.626417088Z" level=info msg="StopPodSandbox for \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\"" Jan 17 12:01:14.626547 containerd[1439]: time="2025-01-17T12:01:14.626455568Z" level=info msg="Container to stop \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.628651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da-shm.mount: Deactivated successfully. 
Jan 17 12:01:14.631155 containerd[1439]: time="2025-01-17T12:01:14.631119719Z" level=info msg="StopContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" returns successfully" Jan 17 12:01:14.631560 containerd[1439]: time="2025-01-17T12:01:14.631535406Z" level=info msg="StopPodSandbox for \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\"" Jan 17 12:01:14.631617 containerd[1439]: time="2025-01-17T12:01:14.631567646Z" level=info msg="Container to stop \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.631617 containerd[1439]: time="2025-01-17T12:01:14.631579646Z" level=info msg="Container to stop \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.633367 systemd[1]: cri-containerd-159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da.scope: Deactivated successfully. Jan 17 12:01:14.634607 containerd[1439]: time="2025-01-17T12:01:14.631589206Z" level=info msg="Container to stop \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.634701 containerd[1439]: time="2025-01-17T12:01:14.634611012Z" level=info msg="Container to stop \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.634701 containerd[1439]: time="2025-01-17T12:01:14.634629573Z" level=info msg="Container to stop \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:01:14.636185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689-shm.mount: Deactivated successfully. Jan 17 12:01:14.647387 systemd[1]: cri-containerd-3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689.scope: Deactivated successfully. 
Jan 17 12:01:14.661020 containerd[1439]: time="2025-01-17T12:01:14.660909692Z" level=info msg="shim disconnected" id=159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da namespace=k8s.io
Jan 17 12:01:14.661020 containerd[1439]: time="2025-01-17T12:01:14.660972013Z" level=warning msg="cleaning up after shim disconnected" id=159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da namespace=k8s.io
Jan 17 12:01:14.661020 containerd[1439]: time="2025-01-17T12:01:14.660982213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:14.667411 containerd[1439]: time="2025-01-17T12:01:14.667272869Z" level=info msg="shim disconnected" id=3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689 namespace=k8s.io
Jan 17 12:01:14.667411 containerd[1439]: time="2025-01-17T12:01:14.667325830Z" level=warning msg="cleaning up after shim disconnected" id=3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689 namespace=k8s.io
Jan 17 12:01:14.667411 containerd[1439]: time="2025-01-17T12:01:14.667335070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:14.676011 containerd[1439]: time="2025-01-17T12:01:14.675878920Z" level=info msg="TearDown network for sandbox \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\" successfully"
Jan 17 12:01:14.676011 containerd[1439]: time="2025-01-17T12:01:14.675991722Z" level=info msg="StopPodSandbox for \"159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da\" returns successfully"
Jan 17 12:01:14.682059 containerd[1439]: time="2025-01-17T12:01:14.682013573Z" level=info msg="TearDown network for sandbox \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" successfully"
Jan 17 12:01:14.682059 containerd[1439]: time="2025-01-17T12:01:14.682045054Z" level=info msg="StopPodSandbox for \"3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689\" returns successfully"
Jan 17 12:01:14.775997 kubelet[2532]: I0117 12:01:14.775852 2532 scope.go:117] "RemoveContainer" containerID="fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11"
Jan 17 12:01:14.777743 containerd[1439]: time="2025-01-17T12:01:14.777711188Z" level=info msg="RemoveContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\""
Jan 17 12:01:14.780194 containerd[1439]: time="2025-01-17T12:01:14.780168906Z" level=info msg="RemoveContainer for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" returns successfully"
Jan 17 12:01:14.780465 kubelet[2532]: I0117 12:01:14.780443 2532 scope.go:117] "RemoveContainer" containerID="fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11"
Jan 17 12:01:14.780690 containerd[1439]: time="2025-01-17T12:01:14.780656313Z" level=error msg="ContainerStatus for \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\": not found"
Jan 17 12:01:14.782532 kubelet[2532]: E0117 12:01:14.782504 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\": not found" containerID="fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11"
Jan 17 12:01:14.787669 kubelet[2532]: I0117 12:01:14.787627 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11"} err="failed to get container status \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe1516cacfa2b43903d054e9e45c39ec634ca5ee3c8c32f47d448374782e11\": not found"
Jan 17 12:01:14.787733 kubelet[2532]: I0117 12:01:14.787674 2532 scope.go:117] "RemoveContainer" containerID="1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953"
Jan 17 12:01:14.788941 containerd[1439]: time="2025-01-17T12:01:14.788905198Z" level=info msg="RemoveContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\""
Jan 17 12:01:14.795791 containerd[1439]: time="2025-01-17T12:01:14.795752663Z" level=info msg="RemoveContainer for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" returns successfully"
Jan 17 12:01:14.796001 kubelet[2532]: I0117 12:01:14.795966 2532 scope.go:117] "RemoveContainer" containerID="8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9"
Jan 17 12:01:14.796933 containerd[1439]: time="2025-01-17T12:01:14.796894880Z" level=info msg="RemoveContainer for \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\""
Jan 17 12:01:14.799306 containerd[1439]: time="2025-01-17T12:01:14.799266516Z" level=info msg="RemoveContainer for \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\" returns successfully"
Jan 17 12:01:14.799520 kubelet[2532]: I0117 12:01:14.799496 2532 scope.go:117] "RemoveContainer" containerID="113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710"
Jan 17 12:01:14.800646 containerd[1439]: time="2025-01-17T12:01:14.800619657Z" level=info msg="RemoveContainer for \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\""
Jan 17 12:01:14.802660 containerd[1439]: time="2025-01-17T12:01:14.802623007Z" level=info msg="RemoveContainer for \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\" returns successfully"
Jan 17 12:01:14.802824 kubelet[2532]: I0117 12:01:14.802796 2532 scope.go:117] "RemoveContainer" containerID="2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967"
Jan 17 12:01:14.803731 containerd[1439]: time="2025-01-17T12:01:14.803707703Z" level=info msg="RemoveContainer for \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\""
Jan 17 12:01:14.812973 kubelet[2532]: I0117 12:01:14.812941 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-config-path\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.813605 containerd[1439]: time="2025-01-17T12:01:14.813574213Z" level=info msg="RemoveContainer for \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\" returns successfully"
Jan 17 12:01:14.813782 kubelet[2532]: I0117 12:01:14.813762 2532 scope.go:117] "RemoveContainer" containerID="85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd"
Jan 17 12:01:14.814700 containerd[1439]: time="2025-01-17T12:01:14.814672670Z" level=info msg="RemoveContainer for \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\""
Jan 17 12:01:14.816752 containerd[1439]: time="2025-01-17T12:01:14.816726741Z" level=info msg="RemoveContainer for \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\" returns successfully"
Jan 17 12:01:14.816991 kubelet[2532]: I0117 12:01:14.816968 2532 scope.go:117] "RemoveContainer" containerID="1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953"
Jan 17 12:01:14.818061 containerd[1439]: time="2025-01-17T12:01:14.818027521Z" level=error msg="ContainerStatus for \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\": not found"
Jan 17 12:01:14.818304 kubelet[2532]: E0117 12:01:14.818188 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\": not found" containerID="1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953"
Jan 17 12:01:14.818304 kubelet[2532]: I0117 12:01:14.818234 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953"} err="failed to get container status \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fa3d4e7dc7f27b12dd8f4a441a746435a977f9a8427add602c62a7dd81d0953\": not found"
Jan 17 12:01:14.818304 kubelet[2532]: I0117 12:01:14.818248 2532 scope.go:117] "RemoveContainer" containerID="8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9"
Jan 17 12:01:14.818411 containerd[1439]: time="2025-01-17T12:01:14.818369726Z" level=error msg="ContainerStatus for \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\": not found"
Jan 17 12:01:14.818519 kubelet[2532]: E0117 12:01:14.818495 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\": not found" containerID="8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9"
Jan 17 12:01:14.818551 kubelet[2532]: I0117 12:01:14.818530 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9"} err="failed to get container status \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"8be79e82bf6697d9756809ba6b8a2f49c26eeeaf54a5d06601c3dc0555a43cf9\": not found"
Jan 17 12:01:14.818551 kubelet[2532]: I0117 12:01:14.818541 2532 scope.go:117] "RemoveContainer" containerID="113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710"
Jan 17 12:01:14.818727 containerd[1439]: time="2025-01-17T12:01:14.818697851Z" level=error msg="ContainerStatus for \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\": not found"
Jan 17 12:01:14.818910 kubelet[2532]: E0117 12:01:14.818883 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\": not found" containerID="113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710"
Jan 17 12:01:14.818947 kubelet[2532]: I0117 12:01:14.818926 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710"} err="failed to get container status \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\": rpc error: code = NotFound desc = an error occurred when try to find container \"113d32f0e4f415bcf080c4ec5636a6142ecb901c454dce90679766de380c4710\": not found"
Jan 17 12:01:14.818947 kubelet[2532]: I0117 12:01:14.818937 2532 scope.go:117] "RemoveContainer" containerID="2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967"
Jan 17 12:01:14.819119 containerd[1439]: time="2025-01-17T12:01:14.819090697Z" level=error msg="ContainerStatus for \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\": not found"
Jan 17 12:01:14.819271 kubelet[2532]: E0117 12:01:14.819240 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\": not found" containerID="2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967"
Jan 17 12:01:14.819326 kubelet[2532]: I0117 12:01:14.819308 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967"} err="failed to get container status \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c91f35699aa28455a52a8cecb0c0eb4f086078ba428bcb55e977ed3125af967\": not found"
Jan 17 12:01:14.819326 kubelet[2532]: I0117 12:01:14.819325 2532 scope.go:117] "RemoveContainer" containerID="85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd"
Jan 17 12:01:14.819611 containerd[1439]: time="2025-01-17T12:01:14.819497624Z" level=error msg="ContainerStatus for \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\": not found"
Jan 17 12:01:14.820519 kubelet[2532]: E0117 12:01:14.820478 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\": not found" containerID="85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd"
Jan 17 12:01:14.820519 kubelet[2532]: I0117 12:01:14.820506 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd"} err="failed to get container status \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"85f09bfdf89907db56830bda5159e5e562190d022c0bd6d4b2fd408fa6c464dd\": not found"
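
The RemoveContainer/ContainerStatus pairs above are a benign race this log hits for every container of the deleted pod: kubelet removes the container, then a follow-up status probe for the same ID comes back as gRPC NotFound, which kubelet logs (remote_runtime.go, pod_container_deletor.go) and treats as "already gone". A small sketch of that classification, assuming only the standard gRPC status package, not kubelet's actual code:

// Sketch: distinguish "already removed" from real CRI failures.
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func classify(err error) string {
	switch {
	case err == nil:
		return "status available"
	case status.Code(err) == codes.NotFound:
		return "already removed: log and continue"
	default:
		return "real failure: retry or surface"
	}
}

func main() {
	// Mirrors the containerd error text in the log; the ID is truncated here.
	gone := status.Error(codes.NotFound,
		`an error occurred when try to find container "fbfe1516...": not found`)
	fmt.Println(classify(gone))                          // already removed: log and continue
	fmt.Println(classify(errors.New("dial unix: EOF"))) // real failure: retry or surface
}
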
Jan 17 12:01:14.825370 kubelet[2532]: I0117 12:01:14.825345 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-cilium-config-path\") pod \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\" (UID: \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\") "
Jan 17 12:01:14.825427 kubelet[2532]: I0117 12:01:14.825385 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-xtables-lock\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825427 kubelet[2532]: I0117 12:01:14.825404 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cni-path\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825427 kubelet[2532]: I0117 12:01:14.825420 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-etc-cni-netd\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825509 kubelet[2532]: I0117 12:01:14.825440 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-clustermesh-secrets\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825509 kubelet[2532]: I0117 12:01:14.825463 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h46j\" (UniqueName: \"kubernetes.io/projected/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-kube-api-access-2h46j\") pod \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\" (UID: \"93e65307-93ed-4bb9-9ac1-1ec2214c78ab\") "
Jan 17 12:01:14.825509 kubelet[2532]: I0117 12:01:14.825481 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-kernel\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825509 kubelet[2532]: I0117 12:01:14.825499 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-cgroup\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825589 kubelet[2532]: I0117 12:01:14.825518 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-bpf-maps\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825589 kubelet[2532]: I0117 12:01:14.825536 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hubble-tls\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825589 kubelet[2532]: I0117 12:01:14.825552 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hostproc\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.825589 kubelet[2532]: I0117 12:01:14.825537 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:01:14.825589 kubelet[2532]: I0117 12:01:14.825573 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxt8\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825594 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-run\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825611 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-lib-modules\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825631 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-net\") pod \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\" (UID: \"d42eb1f6-ca8e-4d80-8e73-fa5046babc27\") "
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825678 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825711 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826064 kubelet[2532]: I0117 12:01:14.825737 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hostproc" (OuterVolumeSpecName: "hostproc") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826198 kubelet[2532]: I0117 12:01:14.825916 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826198 kubelet[2532]: I0117 12:01:14.825995 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826198 kubelet[2532]: I0117 12:01:14.826017 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826198 kubelet[2532]: I0117 12:01:14.826080 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826198 kubelet[2532]: I0117 12:01:14.826103 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.826306 kubelet[2532]: I0117 12:01:14.826122 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cni-path" (OuterVolumeSpecName: "cni-path") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.828197 kubelet[2532]: I0117 12:01:14.827871 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93e65307-93ed-4bb9-9ac1-1ec2214c78ab" (UID: "93e65307-93ed-4bb9-9ac1-1ec2214c78ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:01:14.828197 kubelet[2532]: I0117 12:01:14.827934 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.828197 kubelet[2532]: I0117 12:01:14.827964 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:01:14.835211 kubelet[2532]: I0117 12:01:14.835108 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8" (OuterVolumeSpecName: "kube-api-access-8bxt8") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "kube-api-access-8bxt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:01:14.835211 kubelet[2532]: I0117 12:01:14.835139 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:01:14.835211 kubelet[2532]: I0117 12:01:14.835183 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d42eb1f6-ca8e-4d80-8e73-fa5046babc27" (UID: "d42eb1f6-ca8e-4d80-8e73-fa5046babc27"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:01:14.835211 kubelet[2532]: I0117 12:01:14.835183 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-kube-api-access-2h46j" (OuterVolumeSpecName: "kube-api-access-2h46j") pod "93e65307-93ed-4bb9-9ac1-1ec2214c78ab" (UID: "93e65307-93ed-4bb9-9ac1-1ec2214c78ab"). InnerVolumeSpecName "kube-api-access-2h46j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926585 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926627 2532 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926643 2532 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926653 2532 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926663 2532 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926674 2532 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2h46j\" (UniqueName: \"kubernetes.io/projected/93e65307-93ed-4bb9-9ac1-1ec2214c78ab-kube-api-access-2h46j\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926684 2532 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.926833 kubelet[2532]: I0117 12:01:14.926693 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926701 2532 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926710 2532 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926719 2532 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926729 2532 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8bxt8\" (UniqueName: \"kubernetes.io/projected/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-kube-api-access-8bxt8\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926738 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926746 2532 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:14.927148 kubelet[2532]: I0117 12:01:14.926755 2532 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d42eb1f6-ca8e-4d80-8e73-fa5046babc27-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 17 12:01:15.068223 systemd[1]: Removed slice kubepods-besteffort-pod93e65307_93ed_4bb9_9ac1_1ec2214c78ab.slice - libcontainer container kubepods-besteffort-pod93e65307_93ed_4bb9_9ac1_1ec2214c78ab.slice.
Jan 17 12:01:15.074313 systemd[1]: Removed slice kubepods-burstable-podd42eb1f6_ca8e_4d80_8e73_fa5046babc27.slice - libcontainer container kubepods-burstable-podd42eb1f6_ca8e_4d80_8e73_fa5046babc27.slice.
Jan 17 12:01:15.074398 systemd[1]: kubepods-burstable-podd42eb1f6_ca8e_4d80_8e73_fa5046babc27.slice: Consumed 6.589s CPU time.
Jan 17 12:01:15.531355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3279f96184d9e2a65a3d6a309b458d734b62e9bccc335e7c68c1cf2e150c0689-rootfs.mount: Deactivated successfully.
Jan 17 12:01:15.531675 systemd[1]: var-lib-kubelet-pods-d42eb1f6\x2dca8e\x2d4d80\x2d8e73\x2dfa5046babc27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8bxt8.mount: Deactivated successfully.
Jan 17 12:01:15.531757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-159482303a2a5227ac48a5bf5cc84ae8d546cb728e96d6cce1b68eb1812577da-rootfs.mount: Deactivated successfully.
Jan 17 12:01:15.531818 systemd[1]: var-lib-kubelet-pods-93e65307\x2d93ed\x2d4bb9\x2d9ac1\x2d1ec2214c78ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2h46j.mount: Deactivated successfully.
Jan 17 12:01:15.531871 systemd[1]: var-lib-kubelet-pods-d42eb1f6\x2dca8e\x2d4d80\x2d8e73\x2dfa5046babc27-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:01:15.531937 systemd[1]: var-lib-kubelet-pods-d42eb1f6\x2dca8e\x2d4d80\x2d8e73\x2dfa5046babc27-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 12:01:15.561467 kubelet[2532]: E0117 12:01:15.561437 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:15.631079 kubelet[2532]: E0117 12:01:15.631027 2532 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:01:16.474305 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jan 17 12:01:16.485534 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:57110.service: Deactivated successfully.
Jan 17 12:01:16.487149 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:01:16.487299 systemd[1]: session-24.scope: Consumed 1.417s CPU time.
Jan 17 12:01:16.488708 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:01:16.495643 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:48266.service - OpenSSH per-connection server daemon (10.0.0.1:48266).
Jan 17 12:01:16.496944 systemd-logind[1426]: Removed session 24.
Jan 17 12:01:16.528814 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 48266 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:01:16.530035 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:01:16.533959 systemd-logind[1426]: New session 25 of user core.
Jan 17 12:01:16.548060 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:01:16.564851 kubelet[2532]: I0117 12:01:16.564707 2532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="93e65307-93ed-4bb9-9ac1-1ec2214c78ab" path="/var/lib/kubelet/pods/93e65307-93ed-4bb9-9ac1-1ec2214c78ab/volumes"
Jan 17 12:01:16.565214 kubelet[2532]: I0117 12:01:16.565150 2532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" path="/var/lib/kubelet/pods/d42eb1f6-ca8e-4d80-8e73-fa5046babc27/volumes"
Jan 17 12:01:17.340171 sshd[4352]: pam_unix(sshd:session): session closed for user core
Jan 17 12:01:17.349870 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:48266.service: Deactivated successfully.
Jan 17 12:01:17.354622 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:01:17.357033 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:01:17.363615 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:48274.service - OpenSSH per-connection server daemon (10.0.0.1:48274).
Jan 17 12:01:17.366669 systemd-logind[1426]: Removed session 25.
Jan 17 12:01:17.369953 kubelet[2532]: I0117 12:01:17.368161 2532 topology_manager.go:215] "Topology Admit Handler" podUID="9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58" podNamespace="kube-system" podName="cilium-w4ghg"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368216 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="mount-cgroup"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368226 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="apply-sysctl-overwrites"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368233 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="mount-bpf-fs"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368240 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="clean-cilium-state"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368248 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93e65307-93ed-4bb9-9ac1-1ec2214c78ab" containerName="cilium-operator"
Jan 17 12:01:17.369953 kubelet[2532]: E0117 12:01:17.368254 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="cilium-agent"
Jan 17 12:01:17.374987 kubelet[2532]: I0117 12:01:17.374845 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e65307-93ed-4bb9-9ac1-1ec2214c78ab" containerName="cilium-operator"
Jan 17 12:01:17.374987 kubelet[2532]: I0117 12:01:17.374902 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42eb1f6-ca8e-4d80-8e73-fa5046babc27" containerName="cilium-agent"
Jan 17 12:01:17.388293 systemd[1]: Created slice kubepods-burstable-pod9be8e0b5_ae6d_41fb_9cc7_748c40ce2f58.slice - libcontainer container kubepods-burstable-pod9be8e0b5_ae6d_41fb_9cc7_748c40ce2f58.slice.
Jan 17 12:01:17.402214 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:01:17.404110 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:01:17.409903 systemd-logind[1426]: New session 26 of user core.
Jan 17 12:01:17.415090 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:01:17.440839 kubelet[2532]: I0117 12:01:17.440740 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-cni-path\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.440839 kubelet[2532]: I0117 12:01:17.440792 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-clustermesh-secrets\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.440839 kubelet[2532]: I0117 12:01:17.440813 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-cilium-ipsec-secrets\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.440977 kubelet[2532]: I0117 12:01:17.440953 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-cilium-run\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441017 kubelet[2532]: I0117 12:01:17.440999 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-bpf-maps\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441104 kubelet[2532]: I0117 12:01:17.441064 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-xtables-lock\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441134 kubelet[2532]: I0117 12:01:17.441106 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6drg\" (UniqueName: \"kubernetes.io/projected/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-kube-api-access-s6drg\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441134 kubelet[2532]: I0117 12:01:17.441129 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-hostproc\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441173 kubelet[2532]: I0117 12:01:17.441147 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-etc-cni-netd\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441173 kubelet[2532]: I0117 12:01:17.441168 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-lib-modules\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441223 kubelet[2532]: I0117 12:01:17.441206 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-cilium-config-path\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441245 kubelet[2532]: I0117 12:01:17.441238 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-hubble-tls\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441266 kubelet[2532]: I0117 12:01:17.441261 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-host-proc-sys-net\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441306 kubelet[2532]: I0117 12:01:17.441290 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-host-proc-sys-kernel\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.441330 kubelet[2532]: I0117 12:01:17.441317 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58-cilium-cgroup\") pod \"cilium-w4ghg\" (UID: \"9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58\") " pod="kube-system/cilium-w4ghg"
Jan 17 12:01:17.463438 sshd[4365]: pam_unix(sshd:session): session closed for user core
Jan 17 12:01:17.472506 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:48274.service: Deactivated successfully.
Jan 17 12:01:17.474251 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:01:17.475583 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:01:17.477400 systemd[1]: Started sshd@26-10.0.0.33:22-10.0.0.1:48290.service - OpenSSH per-connection server daemon (10.0.0.1:48290).
Jan 17 12:01:17.478530 systemd-logind[1426]: Removed session 26.
Jan 17 12:01:17.514382 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 48290 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:01:17.515625 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:01:17.521078 systemd-logind[1426]: New session 27 of user core.
Jan 17 12:01:17.533039 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:01:17.561703 kubelet[2532]: E0117 12:01:17.561620 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:17.691552 kubelet[2532]: E0117 12:01:17.691077 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:17.691844 containerd[1439]: time="2025-01-17T12:01:17.691735119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4ghg,Uid:9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58,Namespace:kube-system,Attempt:0,}"
Jan 17 12:01:17.708103 containerd[1439]: time="2025-01-17T12:01:17.708023509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:01:17.708103 containerd[1439]: time="2025-01-17T12:01:17.708069989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:01:17.708103 containerd[1439]: time="2025-01-17T12:01:17.708080469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:17.708340 containerd[1439]: time="2025-01-17T12:01:17.708148430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:17.728063 systemd[1]: Started cri-containerd-4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280.scope - libcontainer container 4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280.
Jan 17 12:01:17.746196 containerd[1439]: time="2025-01-17T12:01:17.746102844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4ghg,Uid:9be8e0b5-ae6d-41fb-9cc7-748c40ce2f58,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\""
Jan 17 12:01:17.747379 kubelet[2532]: E0117 12:01:17.746845 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:17.749529 containerd[1439]: time="2025-01-17T12:01:17.749474052Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:01:17.760082 containerd[1439]: time="2025-01-17T12:01:17.760020240Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6\""
Jan 17 12:01:17.760494 containerd[1439]: time="2025-01-17T12:01:17.760454926Z" level=info msg="StartContainer for \"f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6\""
Jan 17 12:01:17.781107 systemd[1]: Started cri-containerd-f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6.scope - libcontainer container f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6.
Jan 17 12:01:17.800858 containerd[1439]: time="2025-01-17T12:01:17.800811094Z" level=info msg="StartContainer for \"f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6\" returns successfully"
Jan 17 12:01:17.809234 systemd[1]: cri-containerd-f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6.scope: Deactivated successfully.
Jan 17 12:01:17.841088 containerd[1439]: time="2025-01-17T12:01:17.841015940Z" level=info msg="shim disconnected" id=f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6 namespace=k8s.io
Jan 17 12:01:17.841088 containerd[1439]: time="2025-01-17T12:01:17.841072461Z" level=warning msg="cleaning up after shim disconnected" id=f1f0d12147ea7664dc8fd2c931a60bec2217b4b8fdbe0bf97fdfd4241171fef6 namespace=k8s.io
Jan 17 12:01:17.841088 containerd[1439]: time="2025-01-17T12:01:17.841081341Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:18.783656 kubelet[2532]: E0117 12:01:18.783492 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:18.786465 containerd[1439]: time="2025-01-17T12:01:18.786408248Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:01:18.796833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268681665.mount: Deactivated successfully.
Jan 17 12:01:18.801838 containerd[1439]: time="2025-01-17T12:01:18.801334933Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0\""
Jan 17 12:01:18.803694 containerd[1439]: time="2025-01-17T12:01:18.803663364Z" level=info msg="StartContainer for \"c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0\""
Jan 17 12:01:18.832051 systemd[1]: Started cri-containerd-c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0.scope - libcontainer container c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0.
Jan 17 12:01:18.850947 containerd[1439]: time="2025-01-17T12:01:18.850906133Z" level=info msg="StartContainer for \"c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0\" returns successfully"
Jan 17 12:01:18.859345 systemd[1]: cri-containerd-c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0.scope: Deactivated successfully.
Jan 17 12:01:18.878533 containerd[1439]: time="2025-01-17T12:01:18.878462231Z" level=info msg="shim disconnected" id=c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0 namespace=k8s.io
Jan 17 12:01:18.878533 containerd[1439]: time="2025-01-17T12:01:18.878516271Z" level=warning msg="cleaning up after shim disconnected" id=c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0 namespace=k8s.io
Jan 17 12:01:18.878533 containerd[1439]: time="2025-01-17T12:01:18.878524352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:19.546582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c74dadb242c6eb625ccc0d0e0c6e65f2d7042008ea6256a8b90d97c554bcd1b0-rootfs.mount: Deactivated successfully.
Jan 17 12:01:19.787024 kubelet[2532]: E0117 12:01:19.786987 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:19.789489 containerd[1439]: time="2025-01-17T12:01:19.789430579Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:01:19.804048 containerd[1439]: time="2025-01-17T12:01:19.803496047Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8\""
Jan 17 12:01:19.804839 containerd[1439]: time="2025-01-17T12:01:19.804800265Z" level=info msg="StartContainer for \"15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8\""
Jan 17 12:01:19.831554 systemd[1]: Started cri-containerd-15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8.scope - libcontainer container 15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8.
Jan 17 12:01:19.856515 containerd[1439]: time="2025-01-17T12:01:19.856461036Z" level=info msg="StartContainer for \"15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8\" returns successfully"
Jan 17 12:01:19.856712 systemd[1]: cri-containerd-15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8.scope: Deactivated successfully.
Jan 17 12:01:19.885961 containerd[1439]: time="2025-01-17T12:01:19.885902070Z" level=info msg="shim disconnected" id=15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8 namespace=k8s.io
Jan 17 12:01:19.886428 containerd[1439]: time="2025-01-17T12:01:19.886261275Z" level=warning msg="cleaning up after shim disconnected" id=15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8 namespace=k8s.io
Jan 17 12:01:19.886428 containerd[1439]: time="2025-01-17T12:01:19.886282395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:20.546684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15d5ef49c7163ecd88cc3c43d72bab3ce52914c6f00ef7c7a292b4773d94fee8-rootfs.mount: Deactivated successfully.
Jan 17 12:01:20.631936 kubelet[2532]: E0117 12:01:20.631877 2532 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:01:20.790386 kubelet[2532]: E0117 12:01:20.790328 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:20.792106 containerd[1439]: time="2025-01-17T12:01:20.792075049Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:01:20.808683 containerd[1439]: time="2025-01-17T12:01:20.808570865Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6\""
Jan 17 12:01:20.809423 containerd[1439]: time="2025-01-17T12:01:20.809282374Z" level=info msg="StartContainer for \"981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6\""
Jan 17 12:01:20.838099 systemd[1]: Started cri-containerd-981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6.scope - libcontainer container 981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6.
Jan 17 12:01:20.858111 systemd[1]: cri-containerd-981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6.scope: Deactivated successfully.
Jan 17 12:01:20.861439 containerd[1439]: time="2025-01-17T12:01:20.861405534Z" level=info msg="StartContainer for \"981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6\" returns successfully"
Jan 17 12:01:20.880395 containerd[1439]: time="2025-01-17T12:01:20.880296360Z" level=info msg="shim disconnected" id=981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6 namespace=k8s.io
Jan 17 12:01:20.880395 containerd[1439]: time="2025-01-17T12:01:20.880345241Z" level=warning msg="cleaning up after shim disconnected" id=981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6 namespace=k8s.io
Jan 17 12:01:20.880395 containerd[1439]: time="2025-01-17T12:01:20.880353481Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:01:21.546842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-981985be182f481d883b37fe460786dc1a569367d596aed2fb07db04e5d3d8e6-rootfs.mount: Deactivated successfully.
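
Each block from 12:01:17 onward is one cilium init container completing the same lifecycle: CreateContainer, StartContainer, scope deactivation, shim disconnect, rootfs unmount. The chain runs strictly in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), and only after it finishes does the long-running cilium-agent start below. A toy model of that ordering, with container names taken from the log and runStep standing in for the real CRI round-trips:

// Sketch: init containers run one at a time; a failure halts the chain.
package main

import "fmt"

func runStep(name string) error {
	// Stand-in for CreateContainer + StartContainer + wait-for-exit.
	fmt.Println("StartContainer for", name, "returns successfully")
	return nil
}

func main() {
	initChain := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	for _, step := range initChain {
		if err := runStep(step); err != nil {
			return // a failed init container blocks everything after it
		}
	}
	runStep("cilium-agent") // the main container starts only after the chain completes
}
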
Jan 17 12:01:21.804132 kubelet[2532]: E0117 12:01:21.803496 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:21.807231 containerd[1439]: time="2025-01-17T12:01:21.807181313Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:01:21.820346 containerd[1439]: time="2025-01-17T12:01:21.820240319Z" level=info msg="CreateContainer within sandbox \"4d8dc723579844457f0cb45a5b04a3157e15a9c93a06bd8cde5db9ede5f21280\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed89d92347d3f9bf281fe7b76a02085e8e8cd2ed9d3b9b280d3798f8977923c6\""
Jan 17 12:01:21.821375 containerd[1439]: time="2025-01-17T12:01:21.821326013Z" level=info msg="StartContainer for \"ed89d92347d3f9bf281fe7b76a02085e8e8cd2ed9d3b9b280d3798f8977923c6\""
Jan 17 12:01:21.850069 systemd[1]: Started cri-containerd-ed89d92347d3f9bf281fe7b76a02085e8e8cd2ed9d3b9b280d3798f8977923c6.scope - libcontainer container ed89d92347d3f9bf281fe7b76a02085e8e8cd2ed9d3b9b280d3798f8977923c6.
Jan 17 12:01:21.876440 containerd[1439]: time="2025-01-17T12:01:21.876395673Z" level=info msg="StartContainer for \"ed89d92347d3f9bf281fe7b76a02085e8e8cd2ed9d3b9b280d3798f8977923c6\" returns successfully"
Jan 17 12:01:21.923798 kubelet[2532]: I0117 12:01:21.923763 2532 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:01:21Z","lastTransitionTime":"2025-01-17T12:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:01:22.173128 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 17 12:01:22.807840 kubelet[2532]: E0117 12:01:22.807535 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:23.809743 kubelet[2532]: E0117 12:01:23.809292 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:24.811439 kubelet[2532]: E0117 12:01:24.811388 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:25.015998 systemd-networkd[1390]: lxc_health: Link UP
Jan 17 12:01:25.025774 systemd-networkd[1390]: lxc_health: Gained carrier
Jan 17 12:01:25.710249 kubelet[2532]: I0117 12:01:25.709978 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w4ghg" podStartSLOduration=8.709936739 podStartE2EDuration="8.709936739s" podCreationTimestamp="2025-01-17 12:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:22.821634445 +0000 UTC m=+92.355905166" watchObservedRunningTime="2025-01-17 12:01:25.709936739 +0000 UTC m=+95.244207460"
Jan 17 12:01:25.814014 kubelet[2532]: E0117 12:01:25.813594 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:26.585052 systemd-networkd[1390]: lxc_health: Gained IPv6LL
Jan 17 12:01:26.815313 kubelet[2532]: E0117 12:01:26.815246 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:27.816953 kubelet[2532]: E0117 12:01:27.816875 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:01:30.323500 sshd[4373]: pam_unix(sshd:session): session closed for user core
Jan 17 12:01:30.329382 systemd[1]: sshd@26-10.0.0.33:22-10.0.0.1:48290.service: Deactivated successfully.
Jan 17 12:01:30.331241 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:01:30.332418 systemd-logind[1426]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:01:30.335169 systemd-logind[1426]: Removed session 27.