Jan 30 12:58:06.055423 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 12:58:06.055446 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 12:58:06.055456 kernel: KASLR enabled Jan 30 12:58:06.055462 kernel: efi: EFI v2.7 by EDK II Jan 30 12:58:06.055468 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 30 12:58:06.055474 kernel: random: crng init done Jan 30 12:58:06.055482 kernel: ACPI: Early table checksum verification disabled Jan 30 12:58:06.055488 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 30 12:58:06.055494 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 12:58:06.055502 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055509 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055515 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055521 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055528 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055535 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055543 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055550 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055557 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:58:06.055563 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 30 12:58:06.055570 kernel: NUMA: Failed to initialise from firmware Jan 30 12:58:06.055577 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:58:06.055583 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jan 30 12:58:06.055590 kernel: Zone ranges: Jan 30 12:58:06.055596 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:58:06.055603 kernel: DMA32 empty Jan 30 12:58:06.055611 kernel: Normal empty Jan 30 12:58:06.055617 kernel: Movable zone start for each node Jan 30 12:58:06.055624 kernel: Early memory node ranges Jan 30 12:58:06.055630 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 30 12:58:06.055637 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 12:58:06.055643 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 12:58:06.055650 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 12:58:06.055656 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 12:58:06.055663 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 12:58:06.055669 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 12:58:06.055676 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:58:06.055682 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 12:58:06.055690 kernel: psci: probing for conduit method from ACPI. Jan 30 12:58:06.055697 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 12:58:06.055703 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 12:58:06.055713 kernel: psci: Trusted OS migration not required Jan 30 12:58:06.055720 kernel: psci: SMC Calling Convention v1.1 Jan 30 12:58:06.055727 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 12:58:06.055735 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 12:58:06.055742 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 12:58:06.055771 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 12:58:06.055779 kernel: Detected PIPT I-cache on CPU0 Jan 30 12:58:06.055786 kernel: CPU features: detected: GIC system register CPU interface Jan 30 12:58:06.055793 kernel: CPU features: detected: Hardware dirty bit management Jan 30 12:58:06.055800 kernel: CPU features: detected: Spectre-v4 Jan 30 12:58:06.055807 kernel: CPU features: detected: Spectre-BHB Jan 30 12:58:06.055814 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 12:58:06.055821 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 12:58:06.055829 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 12:58:06.055836 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 12:58:06.055843 kernel: alternatives: applying boot alternatives Jan 30 12:58:06.055851 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:58:06.055858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 12:58:06.055865 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 12:58:06.055872 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 12:58:06.055879 kernel: Fallback order for Node 0: 0 Jan 30 12:58:06.055886 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 12:58:06.055893 kernel: Policy zone: DMA Jan 30 12:58:06.055900 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 12:58:06.055908 kernel: software IO TLB: area num 4. Jan 30 12:58:06.055915 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 12:58:06.055923 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Jan 30 12:58:06.055930 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 12:58:06.055937 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 12:58:06.055945 kernel: rcu: RCU event tracing is enabled. Jan 30 12:58:06.055952 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 12:58:06.055959 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 12:58:06.055966 kernel: Tracing variant of Tasks RCU enabled. Jan 30 12:58:06.055973 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 12:58:06.055980 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 12:58:06.055987 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 12:58:06.055996 kernel: GICv3: 256 SPIs implemented Jan 30 12:58:06.056003 kernel: GICv3: 0 Extended SPIs implemented Jan 30 12:58:06.056010 kernel: Root IRQ handler: gic_handle_irq Jan 30 12:58:06.056017 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 12:58:06.056024 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 12:58:06.056031 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 12:58:06.056038 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 12:58:06.056045 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 12:58:06.056052 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 12:58:06.056059 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 12:58:06.056079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 12:58:06.056089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:58:06.056105 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 12:58:06.056113 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 12:58:06.056120 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 12:58:06.056127 kernel: arm-pv: using stolen time PV Jan 30 12:58:06.056134 kernel: Console: colour dummy device 80x25 Jan 30 12:58:06.056141 kernel: ACPI: Core revision 20230628 Jan 30 12:58:06.056149 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 12:58:06.056156 kernel: pid_max: default: 32768 minimum: 301 Jan 30 12:58:06.056163 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 12:58:06.056172 kernel: landlock: Up and running. Jan 30 12:58:06.056179 kernel: SELinux: Initializing. Jan 30 12:58:06.056186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:58:06.056193 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:58:06.056201 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:58:06.056208 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:58:06.056216 kernel: rcu: Hierarchical SRCU implementation. Jan 30 12:58:06.056223 kernel: rcu: Max phase no-delay instances is 400. Jan 30 12:58:06.056230 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 12:58:06.056238 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 12:58:06.056245 kernel: Remapping and enabling EFI services. Jan 30 12:58:06.056252 kernel: smp: Bringing up secondary CPUs ... 
Jan 30 12:58:06.056259 kernel: Detected PIPT I-cache on CPU1 Jan 30 12:58:06.056267 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 12:58:06.056274 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 12:58:06.056281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:58:06.056288 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 12:58:06.056295 kernel: Detected PIPT I-cache on CPU2 Jan 30 12:58:06.056302 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 12:58:06.056311 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 12:58:06.056318 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:58:06.056331 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 12:58:06.056340 kernel: Detected PIPT I-cache on CPU3 Jan 30 12:58:06.056347 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 12:58:06.056355 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 12:58:06.056362 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:58:06.056369 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 12:58:06.056377 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 12:58:06.056386 kernel: SMP: Total of 4 processors activated. Jan 30 12:58:06.056393 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 12:58:06.056400 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 12:58:06.056408 kernel: CPU features: detected: Common not Private translations Jan 30 12:58:06.056415 kernel: CPU features: detected: CRC32 instructions Jan 30 12:58:06.056423 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 12:58:06.056430 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 12:58:06.056438 kernel: CPU features: detected: LSE atomic instructions Jan 30 12:58:06.056447 kernel: CPU features: detected: Privileged Access Never Jan 30 12:58:06.056454 kernel: CPU features: detected: RAS Extension Support Jan 30 12:58:06.056462 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 12:58:06.056469 kernel: CPU: All CPU(s) started at EL1 Jan 30 12:58:06.056476 kernel: alternatives: applying system-wide alternatives Jan 30 12:58:06.056484 kernel: devtmpfs: initialized Jan 30 12:58:06.056492 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 12:58:06.056499 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 12:58:06.056507 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 12:58:06.056516 kernel: SMBIOS 3.0.0 present. 
Jan 30 12:58:06.056523 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 30 12:58:06.056531 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 12:58:06.056538 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 12:58:06.056546 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 12:58:06.056554 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 12:58:06.056561 kernel: audit: initializing netlink subsys (disabled) Jan 30 12:58:06.056569 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1 Jan 30 12:58:06.056576 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 12:58:06.056586 kernel: cpuidle: using governor menu Jan 30 12:58:06.056593 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 12:58:06.056601 kernel: ASID allocator initialised with 32768 entries Jan 30 12:58:06.056608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 12:58:06.056616 kernel: Serial: AMBA PL011 UART driver Jan 30 12:58:06.056624 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 12:58:06.056631 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 12:58:06.056639 kernel: Modules: 509040 pages in range for PLT usage Jan 30 12:58:06.056646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 12:58:06.056655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 12:58:06.056663 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 12:58:06.056670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 12:58:06.056677 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 12:58:06.056685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 12:58:06.056693 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 12:58:06.056700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 12:58:06.056707 kernel: ACPI: Added _OSI(Module Device) Jan 30 12:58:06.056715 kernel: ACPI: Added _OSI(Processor Device) Jan 30 12:58:06.056724 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 12:58:06.056731 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 12:58:06.056738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 12:58:06.056746 kernel: ACPI: Interpreter enabled Jan 30 12:58:06.056753 kernel: ACPI: Using GIC for interrupt routing Jan 30 12:58:06.056760 kernel: ACPI: MCFG table detected, 1 entries Jan 30 12:58:06.056768 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 12:58:06.056775 kernel: printk: console [ttyAMA0] enabled Jan 30 12:58:06.056783 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 12:58:06.056938 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 12:58:06.057014 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 12:58:06.057118 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 12:58:06.057189 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 12:58:06.057255 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 12:58:06.057265 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 
12:58:06.057273 kernel: PCI host bridge to bus 0000:00 Jan 30 12:58:06.057352 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 12:58:06.057414 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 12:58:06.057475 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 12:58:06.057537 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 12:58:06.057628 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 12:58:06.057707 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 12:58:06.057785 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 30 12:58:06.057856 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 30 12:58:06.057926 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:58:06.057996 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:58:06.058120 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 30 12:58:06.058197 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 30 12:58:06.058262 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 12:58:06.058330 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 12:58:06.058393 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 12:58:06.058403 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 12:58:06.058411 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 12:58:06.058419 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 12:58:06.058426 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 12:58:06.058434 kernel: iommu: Default domain type: Translated Jan 30 12:58:06.058442 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 12:58:06.058450 kernel: efivars: Registered efivars operations Jan 30 12:58:06.058459 kernel: vgaarb: loaded Jan 30 12:58:06.058467 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 12:58:06.058474 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 12:58:06.058482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 12:58:06.058490 kernel: pnp: PnP ACPI init Jan 30 12:58:06.058567 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 12:58:06.058578 kernel: pnp: PnP ACPI: found 1 devices Jan 30 12:58:06.058586 kernel: NET: Registered PF_INET protocol family Jan 30 12:58:06.058596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 12:58:06.058604 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 12:58:06.058612 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 12:58:06.058620 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 12:58:06.058627 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 12:58:06.058635 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 12:58:06.058642 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:58:06.058650 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:58:06.058658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 12:58:06.058667 kernel: PCI: CLS 0 bytes, default 64 Jan 30 12:58:06.058675 kernel: kvm [1]: HYP mode 
not available Jan 30 12:58:06.058683 kernel: Initialise system trusted keyrings Jan 30 12:58:06.058690 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 12:58:06.058698 kernel: Key type asymmetric registered Jan 30 12:58:06.058706 kernel: Asymmetric key parser 'x509' registered Jan 30 12:58:06.058713 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 12:58:06.058721 kernel: io scheduler mq-deadline registered Jan 30 12:58:06.058729 kernel: io scheduler kyber registered Jan 30 12:58:06.058738 kernel: io scheduler bfq registered Jan 30 12:58:06.058746 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 12:58:06.058754 kernel: ACPI: button: Power Button [PWRB] Jan 30 12:58:06.058762 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 12:58:06.058832 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 30 12:58:06.058842 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 12:58:06.058850 kernel: thunder_xcv, ver 1.0 Jan 30 12:58:06.058857 kernel: thunder_bgx, ver 1.0 Jan 30 12:58:06.058865 kernel: nicpf, ver 1.0 Jan 30 12:58:06.058874 kernel: nicvf, ver 1.0 Jan 30 12:58:06.058954 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 12:58:06.059023 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:58:05 UTC (1738241885) Jan 30 12:58:06.059034 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 12:58:06.059041 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 12:58:06.059049 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 12:58:06.059057 kernel: watchdog: Hard watchdog permanently disabled Jan 30 12:58:06.059096 kernel: NET: Registered PF_INET6 protocol family Jan 30 12:58:06.059108 kernel: Segment Routing with IPv6 Jan 30 12:58:06.059120 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 12:58:06.059129 kernel: NET: Registered PF_PACKET protocol family Jan 30 12:58:06.059136 kernel: Key type dns_resolver registered Jan 30 12:58:06.059144 kernel: registered taskstats version 1 Jan 30 12:58:06.059152 kernel: Loading compiled-in X.509 certificates Jan 30 12:58:06.059160 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 12:58:06.059167 kernel: Key type .fscrypt registered Jan 30 12:58:06.059175 kernel: Key type fscrypt-provisioning registered Jan 30 12:58:06.059185 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 12:58:06.059193 kernel: ima: Allocated hash algorithm: sha1 Jan 30 12:58:06.059200 kernel: ima: No architecture policies found Jan 30 12:58:06.059208 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 12:58:06.059215 kernel: clk: Disabling unused clocks Jan 30 12:58:06.059223 kernel: Freeing unused kernel memory: 39360K Jan 30 12:58:06.059230 kernel: Run /init as init process Jan 30 12:58:06.059238 kernel: with arguments: Jan 30 12:58:06.059245 kernel: /init Jan 30 12:58:06.059254 kernel: with environment: Jan 30 12:58:06.059261 kernel: HOME=/ Jan 30 12:58:06.059269 kernel: TERM=linux Jan 30 12:58:06.059276 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 12:58:06.059287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:58:06.059297 systemd[1]: Detected virtualization kvm. Jan 30 12:58:06.059305 systemd[1]: Detected architecture arm64. Jan 30 12:58:06.059315 systemd[1]: Running in initrd. Jan 30 12:58:06.059323 systemd[1]: No hostname configured, using default hostname. Jan 30 12:58:06.059332 systemd[1]: Hostname set to . Jan 30 12:58:06.059340 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:58:06.059348 systemd[1]: Queued start job for default target initrd.target. Jan 30 12:58:06.059357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:58:06.059365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:58:06.059374 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 12:58:06.059385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:58:06.059394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 12:58:06.059402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 12:58:06.059412 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 12:58:06.059421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 12:58:06.059430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:58:06.059438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:58:06.059448 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:58:06.059456 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:58:06.059465 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:58:06.059473 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:58:06.059481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:58:06.059490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:58:06.059498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:58:06.059506 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:58:06.059515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 30 12:58:06.059525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:58:06.059533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:58:06.059541 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:58:06.059550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:58:06.059558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:58:06.059567 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:58:06.059575 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:58:06.059583 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:58:06.059593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:58:06.059602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:58:06.059610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:58:06.059619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:58:06.059627 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:58:06.059636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:58:06.059646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:58:06.059655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:58:06.059663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:58:06.059672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:58:06.059681 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:58:06.059689 kernel: Bridge firewalling registered Jan 30 12:58:06.059696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:58:06.059733 systemd-journald[238]: Collecting audit messages is disabled. Jan 30 12:58:06.059754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:58:06.059764 systemd-journald[238]: Journal started Jan 30 12:58:06.059786 systemd-journald[238]: Runtime Journal (/run/log/journal/51f9f90212d6471c990f602abd496c89) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:58:06.010853 systemd-modules-load[239]: Inserted module 'overlay' Jan 30 12:58:06.074217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:58:06.051856 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 30 12:58:06.077845 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:58:06.080101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:58:06.093268 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:58:06.095157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:58:06.096585 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:58:06.109035 dracut-cmdline[273]: dracut-dracut-053 Jan 30 12:58:06.109113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 12:58:06.115774 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:58:06.122287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:58:06.150195 systemd-resolved[288]: Positive Trust Anchors: Jan 30 12:58:06.150217 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:58:06.150249 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:58:06.163804 systemd-resolved[288]: Defaulting to hostname 'linux'. Jan 30 12:58:06.165116 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:58:06.167516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:58:06.217110 kernel: SCSI subsystem initialized Jan 30 12:58:06.228095 kernel: Loading iSCSI transport class v2.0-870. Jan 30 12:58:06.238104 kernel: iscsi: registered transport (tcp) Jan 30 12:58:06.254100 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:58:06.254121 kernel: QLogic iSCSI HBA Driver Jan 30 12:58:06.311220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:58:06.324274 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:58:06.342634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 12:58:06.342700 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:58:06.348124 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:58:06.396112 kernel: raid6: neonx8 gen() 15443 MB/s Jan 30 12:58:06.413099 kernel: raid6: neonx4 gen() 14995 MB/s Jan 30 12:58:06.430102 kernel: raid6: neonx2 gen() 13229 MB/s Jan 30 12:58:06.447100 kernel: raid6: neonx1 gen() 10428 MB/s Jan 30 12:58:06.464095 kernel: raid6: int64x8 gen() 6928 MB/s Jan 30 12:58:06.481101 kernel: raid6: int64x4 gen() 7296 MB/s Jan 30 12:58:06.498098 kernel: raid6: int64x2 gen() 6128 MB/s Jan 30 12:58:06.515251 kernel: raid6: int64x1 gen() 4993 MB/s Jan 30 12:58:06.515282 kernel: raid6: using algorithm neonx8 gen() 15443 MB/s Jan 30 12:58:06.533246 kernel: raid6: .... 
xor() 11826 MB/s, rmw enabled Jan 30 12:58:06.533276 kernel: raid6: using neon recovery algorithm Jan 30 12:58:06.539097 kernel: xor: measuring software checksum speed Jan 30 12:58:06.539124 kernel: 8regs : 19773 MB/sec Jan 30 12:58:06.540372 kernel: 32regs : 17084 MB/sec Jan 30 12:58:06.540395 kernel: arm64_neon : 26954 MB/sec Jan 30 12:58:06.540405 kernel: xor: using function: arm64_neon (26954 MB/sec) Jan 30 12:58:06.599109 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:58:06.616402 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:58:06.630289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:58:06.642695 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jan 30 12:58:06.646036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:58:06.654276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:58:06.666616 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jan 30 12:58:06.695920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:58:06.705273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:58:06.751245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:58:06.757835 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:58:06.775157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:58:06.777028 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:58:06.779133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:58:06.781779 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:58:06.795293 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:58:06.803708 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 30 12:58:06.816153 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 12:58:06.816266 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:58:06.816281 kernel: GPT:9289727 != 19775487 Jan 30 12:58:06.816291 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:58:06.816301 kernel: GPT:9289727 != 19775487 Jan 30 12:58:06.816310 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:58:06.816322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:58:06.807218 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:58:06.811259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:58:06.811378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:58:06.816112 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:58:06.817246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:58:06.817433 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:58:06.819653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:58:06.834096 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (513) Jan 30 12:58:06.835379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 12:58:06.838703 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (517) Jan 30 12:58:06.848020 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:58:06.849604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:58:06.862381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 12:58:06.867321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:58:06.871435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:58:06.872779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:58:06.890264 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:58:06.892321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:58:06.897847 disk-uuid[552]: Primary Header is updated. Jan 30 12:58:06.897847 disk-uuid[552]: Secondary Entries is updated. Jan 30 12:58:06.897847 disk-uuid[552]: Secondary Header is updated. Jan 30 12:58:06.906110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:58:06.910102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:58:06.913389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:58:06.920457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:58:07.919112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:58:07.919356 disk-uuid[553]: The operation has completed successfully. Jan 30 12:58:07.957680 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:58:07.957789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:58:07.986318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:58:07.990235 sh[576]: Success Jan 30 12:58:08.020245 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 12:58:08.058427 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:58:08.076049 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:58:08.078334 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 12:58:08.091104 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 12:58:08.091160 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:58:08.091171 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:58:08.093083 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:58:08.094110 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:58:08.100102 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:58:08.101237 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:58:08.110288 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 12:58:08.112062 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 12:58:08.122178 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:58:08.122236 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:58:08.122255 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:58:08.126127 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:58:08.137559 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:58:08.139441 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:58:08.148486 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 12:58:08.156312 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:58:08.240975 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:58:08.252318 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:58:08.290055 systemd-networkd[762]: lo: Link UP Jan 30 12:58:08.290970 systemd-networkd[762]: lo: Gained carrier Jan 30 12:58:08.292500 systemd-networkd[762]: Enumeration completed Jan 30 12:58:08.292797 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:58:08.294121 systemd[1]: Reached target network.target - Network. Jan 30 12:58:08.294131 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:58:08.294135 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:58:08.301149 systemd-networkd[762]: eth0: Link UP Jan 30 12:58:08.301153 systemd-networkd[762]: eth0: Gained carrier Jan 30 12:58:08.301161 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:58:08.331534 ignition[677]: Ignition 2.19.0 Jan 30 12:58:08.331544 ignition[677]: Stage: fetch-offline Jan 30 12:58:08.331601 ignition[677]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:08.331610 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:08.331829 ignition[677]: parsed url from cmdline: "" Jan 30 12:58:08.331833 ignition[677]: no config URL provided Jan 30 12:58:08.331837 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:58:08.331844 ignition[677]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:58:08.336128 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:58:08.331867 ignition[677]: op(1): [started] loading QEMU firmware config module Jan 30 12:58:08.331872 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 12:58:08.352556 ignition[677]: op(1): [finished] loading QEMU firmware config module Jan 30 12:58:08.352587 ignition[677]: QEMU firmware config was not found. Ignoring... Jan 30 12:58:08.391288 ignition[677]: parsing config with SHA512: 1377165d63d905771620ea28e3eed67f1f50282acd2bb40b5e08320760e2ededd95043e42ab97bf4ee47264c04a2c44d41a4c00695f459622f2bc91d8cf07bfd Jan 30 12:58:08.399781 unknown[677]: fetched base config from "system" Jan 30 12:58:08.399792 unknown[677]: fetched user config from "qemu" Jan 30 12:58:08.400633 ignition[677]: fetch-offline: fetch-offline passed Jan 30 12:58:08.401832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 12:58:08.400715 ignition[677]: Ignition finished successfully Jan 30 12:58:08.404060 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 12:58:08.413321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 12:58:08.428064 ignition[773]: Ignition 2.19.0 Jan 30 12:58:08.428103 ignition[773]: Stage: kargs Jan 30 12:58:08.428288 ignition[773]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:08.428298 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:08.432533 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 12:58:08.429313 ignition[773]: kargs: kargs passed Jan 30 12:58:08.429365 ignition[773]: Ignition finished successfully Jan 30 12:58:08.443429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 12:58:08.455728 ignition[781]: Ignition 2.19.0 Jan 30 12:58:08.455740 ignition[781]: Stage: disks Jan 30 12:58:08.455903 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:08.458630 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:58:08.455912 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:08.460299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:58:08.456884 ignition[781]: disks: disks passed Jan 30 12:58:08.462702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:58:08.456942 ignition[781]: Ignition finished successfully Jan 30 12:58:08.464851 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:58:08.467111 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:58:08.469430 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:58:08.490324 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:58:08.501257 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:58:08.507121 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 12:58:08.510293 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:58:08.570095 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 12:58:08.570575 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 12:58:08.572242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:58:08.584159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:58:08.586122 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 12:58:08.587464 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 12:58:08.587581 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:58:08.587606 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:58:08.596275 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Jan 30 12:58:08.595889 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 12:58:08.600999 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:58:08.601059 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:58:08.601123 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:58:08.599294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 12:58:08.605108 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:58:08.608222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 12:58:08.683963 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:58:08.691214 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:58:08.697149 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:58:08.704905 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:58:08.810107 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:58:08.818210 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:58:08.821133 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:58:08.827098 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:58:08.845505 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 12:58:08.847170 ignition[913]: INFO : Ignition 2.19.0 Jan 30 12:58:08.847170 ignition[913]: INFO : Stage: mount Jan 30 12:58:08.848843 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:08.848843 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:08.848843 ignition[913]: INFO : mount: mount passed Jan 30 12:58:08.848843 ignition[913]: INFO : Ignition finished successfully Jan 30 12:58:08.850446 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:58:08.860217 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:58:09.089585 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:58:09.099286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:58:09.106331 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Jan 30 12:58:09.106367 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:58:09.106378 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:58:09.108190 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:58:09.111094 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:58:09.111664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:58:09.130492 ignition[944]: INFO : Ignition 2.19.0 Jan 30 12:58:09.130492 ignition[944]: INFO : Stage: files Jan 30 12:58:09.132411 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:09.132411 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:09.132411 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Jan 30 12:58:09.136213 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 12:58:09.136213 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 12:58:09.136213 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 12:58:09.136213 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 12:58:09.136213 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 12:58:09.135497 unknown[944]: wrote ssh authorized keys file for user: core Jan 30 12:58:09.144672 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 12:58:09.144672 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 12:58:09.189485 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 12:58:09.327911 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 12:58:09.330167 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 12:58:09.330167 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 12:58:09.469269 systemd-networkd[762]: eth0: Gained IPv6LL Jan 30 12:58:09.747337 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 12:58:09.798531 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:58:09.800436 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 12:58:10.054240 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 12:58:10.257655 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:58:10.257655 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 12:58:10.261782 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 12:58:10.305784 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:58:10.309965 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:58:10.312243 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 12:58:10.312243 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 30 12:58:10.312243 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 12:58:10.312243 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:58:10.312243 
ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:58:10.312243 ignition[944]: INFO : files: files passed Jan 30 12:58:10.312243 ignition[944]: INFO : Ignition finished successfully Jan 30 12:58:10.312863 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 12:58:10.330367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 12:58:10.333257 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 12:58:10.337556 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 12:58:10.337670 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 12:58:10.341282 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 12:58:10.344517 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:58:10.344517 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:58:10.348053 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:58:10.346932 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:58:10.349852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 12:58:10.360635 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 12:58:10.387406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 12:58:10.387553 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 12:58:10.389961 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 12:58:10.392138 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 12:58:10.394132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 12:58:10.394982 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 12:58:10.412166 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:58:10.425254 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 12:58:10.436836 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:58:10.438188 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:58:10.440512 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 12:58:10.442503 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 12:58:10.442643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:58:10.445395 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 12:58:10.447522 systemd[1]: Stopped target basic.target - Basic System. Jan 30 12:58:10.449331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 12:58:10.451076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:58:10.453077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 12:58:10.455224 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 30 12:58:10.457197 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:58:10.459307 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 12:58:10.461391 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 12:58:10.463331 systemd[1]: Stopped target swap.target - Swaps. Jan 30 12:58:10.464889 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 12:58:10.465028 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:58:10.467510 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:58:10.469624 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:58:10.471641 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 12:58:10.471787 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:58:10.473823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 12:58:10.473964 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 12:58:10.476923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 12:58:10.477058 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:58:10.479078 systemd[1]: Stopped target paths.target - Path Units. Jan 30 12:58:10.480719 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 12:58:10.480837 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:58:10.482983 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 12:58:10.484934 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 12:58:10.486521 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 12:58:10.486617 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:58:10.488329 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 12:58:10.488411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:58:10.490554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 12:58:10.490669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:58:10.492366 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 12:58:10.492467 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 12:58:10.504308 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 12:58:10.506012 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 12:58:10.506947 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 12:58:10.507102 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:58:10.509137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 12:58:10.509247 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:58:10.517300 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 30 12:58:10.518264 ignition[999]: INFO : Ignition 2.19.0 Jan 30 12:58:10.518264 ignition[999]: INFO : Stage: umount Jan 30 12:58:10.518264 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:58:10.518264 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:58:10.517412 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 12:58:10.529400 ignition[999]: INFO : umount: umount passed Jan 30 12:58:10.529400 ignition[999]: INFO : Ignition finished successfully Jan 30 12:58:10.521336 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 12:58:10.521476 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 12:58:10.524055 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 12:58:10.524539 systemd[1]: Stopped target network.target - Network. Jan 30 12:58:10.526001 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 12:58:10.526114 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 12:58:10.528381 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 12:58:10.528430 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 12:58:10.530381 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 12:58:10.530436 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 12:58:10.532054 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 12:58:10.532118 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 12:58:10.534128 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 12:58:10.535936 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 12:58:10.544107 systemd-networkd[762]: eth0: DHCPv6 lease lost Jan 30 12:58:10.545408 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 12:58:10.545519 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 12:58:10.548161 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 12:58:10.548306 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 12:58:10.552173 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 12:58:10.552232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:58:10.557274 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 12:58:10.559050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 12:58:10.559140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:58:10.561264 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:58:10.561315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:58:10.563323 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 12:58:10.563370 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 12:58:10.565155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 12:58:10.565200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:58:10.567635 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:58:10.580124 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 12:58:10.580280 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 30 12:58:10.583789 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 12:58:10.583968 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:58:10.586585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 12:58:10.586627 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 12:58:10.587885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 12:58:10.587918 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:58:10.590044 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 12:58:10.590125 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:58:10.592844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 12:58:10.592890 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 12:58:10.595790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:58:10.595840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:58:10.609295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 12:58:10.610469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 12:58:10.610545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:58:10.612760 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 12:58:10.612816 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:58:10.615129 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 12:58:10.615197 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:58:10.617545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:58:10.617598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:58:10.619996 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 12:58:10.620134 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 12:58:10.622078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 12:58:10.622170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 12:58:10.624612 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 12:58:10.626234 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 12:58:10.626311 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 12:58:10.629130 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 12:58:10.640408 systemd[1]: Switching root. Jan 30 12:58:10.662438 systemd-journald[238]: Journal stopped Jan 30 12:58:11.717758 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
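From this point on (after the switch root and the journald restart), the entries come from the main system journal. A minimal sketch for pulling entries like these back out of the current boot, assuming `journalctl` is available on the host, might look like:

```python
import json
import subprocess

# Dump the current boot's journal as JSON (one object per line) and keep only
# entries emitted by systemd itself, similar to the lines quoted in this log.
proc = subprocess.run(
    ["journalctl", "-b", "-o", "json", "--no-pager"],
    capture_output=True, text=True, check=True,
)

for line in proc.stdout.splitlines():
    entry = json.loads(line)
    if entry.get("SYSLOG_IDENTIFIER") == "systemd":
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
```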
Jan 30 12:58:11.724935 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 12:58:11.724958 kernel: SELinux: policy capability open_perms=1 Jan 30 12:58:11.724969 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 12:58:11.724979 kernel: SELinux: policy capability always_check_network=0 Jan 30 12:58:11.724988 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 12:58:11.724999 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 12:58:11.725009 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 12:58:11.725018 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 12:58:11.725032 kernel: audit: type=1403 audit(1738241891.027:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 12:58:11.725046 systemd[1]: Successfully loaded SELinux policy in 33.935ms. Jan 30 12:58:11.725083 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.556ms. Jan 30 12:58:11.725106 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:58:11.725176 systemd[1]: Detected virtualization kvm. Jan 30 12:58:11.725195 systemd[1]: Detected architecture arm64. Jan 30 12:58:11.725206 systemd[1]: Detected first boot. Jan 30 12:58:11.725217 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:58:11.725228 zram_generator::config[1042]: No configuration found. Jan 30 12:58:11.725244 systemd[1]: Populated /etc with preset unit settings. Jan 30 12:58:11.725258 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 12:58:11.725269 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 12:58:11.725281 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 12:58:11.725293 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 12:58:11.725304 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 12:58:11.725315 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 12:58:11.725326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 12:58:11.725337 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 12:58:11.725348 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 12:58:11.725360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 12:58:11.725371 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 12:58:11.725381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:58:11.725393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:58:11.725416 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 12:58:11.725429 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 12:58:11.725440 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 30 12:58:11.725451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:58:11.725464 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 12:58:11.725475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:58:11.725486 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 12:58:11.725497 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 12:58:11.725508 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 12:58:11.725519 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 12:58:11.725530 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:58:11.725541 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:58:11.725553 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:58:11.725564 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:58:11.725575 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 12:58:11.725586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 12:58:11.725597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:58:11.725608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:58:11.725619 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:58:11.725630 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 12:58:11.725640 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 12:58:11.725655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 12:58:11.725666 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 12:58:11.725676 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 12:58:11.725687 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 12:58:11.725698 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 12:58:11.725710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 12:58:11.725720 systemd[1]: Reached target machines.target - Containers. Jan 30 12:58:11.725732 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 12:58:11.725743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:58:11.725759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:58:11.725770 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 12:58:11.725781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:58:11.725792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:58:11.725802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:58:11.725813 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 12:58:11.725824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 12:58:11.725835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 12:58:11.725847 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 12:58:11.725858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 12:58:11.725869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 12:58:11.725879 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 12:58:11.725891 kernel: fuse: init (API version 7.39) Jan 30 12:58:11.725901 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:58:11.725912 kernel: ACPI: bus type drm_connector registered Jan 30 12:58:11.725922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:58:11.725934 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 12:58:11.725945 kernel: loop: module loaded Jan 30 12:58:11.725956 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 12:58:11.725969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:58:11.725981 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 12:58:11.725994 systemd[1]: Stopped verity-setup.service. Jan 30 12:58:11.726007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 12:58:11.726018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 12:58:11.726029 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 12:58:11.726042 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 12:58:11.726053 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 12:58:11.726064 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 12:58:11.726155 systemd-journald[1110]: Collecting audit messages is disabled. Jan 30 12:58:11.726178 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 12:58:11.726194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:58:11.726205 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 12:58:11.726216 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 12:58:11.726230 systemd-journald[1110]: Journal started Jan 30 12:58:11.726252 systemd-journald[1110]: Runtime Journal (/run/log/journal/51f9f90212d6471c990f602abd496c89) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:58:11.450439 systemd[1]: Queued start job for default target multi-user.target. Jan 30 12:58:11.468870 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 12:58:11.469314 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 12:58:11.728128 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:58:11.729860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:58:11.730031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:58:11.731863 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:58:11.732044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:58:11.733648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:58:11.733817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 12:58:11.735524 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 12:58:11.735679 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 12:58:11.737356 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:58:11.737492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:58:11.739044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:58:11.740523 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 12:58:11.742158 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 12:58:11.757835 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 12:58:11.773218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 12:58:11.775699 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 12:58:11.776951 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 12:58:11.777017 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:58:11.780248 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 12:58:11.782933 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 12:58:11.785476 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 12:58:11.786718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:58:11.788276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 12:58:11.790669 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 12:58:11.792087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:58:11.796295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 12:58:11.800014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:58:11.801366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:58:11.806384 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 12:58:11.809329 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:58:11.813214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:58:11.814863 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 12:58:11.818731 systemd-journald[1110]: Time spent on flushing to /var/log/journal/51f9f90212d6471c990f602abd496c89 is 31.204ms for 865 entries. Jan 30 12:58:11.818731 systemd-journald[1110]: System Journal (/var/log/journal/51f9f90212d6471c990f602abd496c89) is 8.0M, max 195.6M, 187.6M free. Jan 30 12:58:11.880773 systemd-journald[1110]: Received client request to flush runtime journal. Jan 30 12:58:11.880872 kernel: loop0: detected capacity change from 0 to 194096 Jan 30 12:58:11.816271 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 30 12:58:11.819121 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 12:58:11.827366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 12:58:11.838585 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 12:58:11.875408 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 12:58:11.878581 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 12:58:11.895112 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 12:58:11.902608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:58:11.906632 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 12:58:11.915335 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 12:58:11.922501 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 12:58:11.922515 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 12:58:11.923426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 12:58:11.925559 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 12:58:11.927513 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:58:11.939440 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 12:58:11.940115 kernel: loop1: detected capacity change from 0 to 114328 Jan 30 12:58:11.982653 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 12:58:11.990196 kernel: loop2: detected capacity change from 0 to 114432 Jan 30 12:58:11.990412 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:58:12.015479 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 30 12:58:12.015498 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 30 12:58:12.019957 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:58:12.028125 kernel: loop3: detected capacity change from 0 to 194096 Jan 30 12:58:12.036189 kernel: loop4: detected capacity change from 0 to 114328 Jan 30 12:58:12.043828 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 12:58:12.048643 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 12:58:12.049177 (sd-merge)[1180]: Merged extensions into '/usr'. Jan 30 12:58:12.053801 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 12:58:12.053996 systemd[1]: Reloading... Jan 30 12:58:12.132227 zram_generator::config[1207]: No configuration found. Jan 30 12:58:12.166502 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 12:58:12.257030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:58:12.299010 systemd[1]: Reloading finished in 244 ms. Jan 30 12:58:12.336050 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
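The (sd-merge) messages above show systemd-sysext picking up the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them over /usr, followed by a daemon reload. A rough sketch of the discovery step, assuming the standard directories systemd-sysext scans, could be:

```python
from pathlib import Path

# Directories systemd-sysext scans for extension images (raw files or
# directory trees); /etc/extensions is where Ignition placed the
# kubernetes.raw symlink earlier in this log.
SEARCH_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def discover_extensions():
    found = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(entry)
    return found

if __name__ == "__main__":
    for ext in discover_extensions():
        print(ext)
```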
Jan 30 12:58:12.345118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 12:58:12.347109 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 12:58:12.377315 systemd[1]: Starting ensure-sysext.service... Jan 30 12:58:12.379699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:58:12.382584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:58:12.391271 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 30 12:58:12.391291 systemd[1]: Reloading... Jan 30 12:58:12.404752 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 12:58:12.405040 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 12:58:12.405705 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 12:58:12.405923 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 30 12:58:12.405977 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 30 12:58:12.408756 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:58:12.408772 systemd-tmpfiles[1242]: Skipping /boot Jan 30 12:58:12.412184 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Jan 30 12:58:12.416680 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:58:12.416698 systemd-tmpfiles[1242]: Skipping /boot Jan 30 12:58:12.443447 zram_generator::config[1268]: No configuration found. Jan 30 12:58:12.498137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1284) Jan 30 12:58:12.560435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:58:12.612472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:58:12.613908 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 12:58:12.614323 systemd[1]: Reloading finished in 222 ms. Jan 30 12:58:12.629731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:58:12.643847 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:58:12.662119 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 12:58:12.663851 systemd[1]: Finished ensure-sysext.service. Jan 30 12:58:12.684402 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:58:12.687218 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 12:58:12.688586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:58:12.689808 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 12:58:12.693905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:58:12.699338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
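systemd-tmpfiles reports "Duplicate line for path ..., ignoring" above because more than one tmpfiles.d fragment declares the same path. A small sketch that scans the usual fragment directories for such duplicates, assuming only that the second whitespace-separated field of a fragment line is the path, might be:

```python
from collections import defaultdict
from pathlib import Path

# tmpfiles.d fragment directories, in the order systemd considers them.
DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def find_duplicate_paths():
    seen = defaultdict(list)  # path -> [(fragment file, line number), ...]
    for d in DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for frag in sorted(p.glob("*.conf")):
            for lineno, line in enumerate(frag.read_text().splitlines(), 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append((frag, lineno))
    return {path: locs for path, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    for path, locs in find_duplicate_paths().items():
        print(path, "->", ", ".join(f"{f}:{n}" for f, n in locs))
```

Note this is a simplification: systemd's own duplicate handling is more nuanced (it only warns for conflicting entries), but the idea is the same.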
Jan 30 12:58:12.704624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:58:12.709241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:58:12.711557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:58:12.714302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 12:58:12.717708 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 12:58:12.723434 lvm[1338]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:58:12.729235 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:58:12.734061 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:58:12.740566 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 12:58:12.746625 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 12:58:12.751280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:58:12.753527 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 12:58:12.755386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:58:12.755563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:58:12.758465 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:58:12.758617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:58:12.760240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:58:12.760373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:58:12.762189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:58:12.762375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:58:12.763860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 12:58:12.767017 augenrules[1364]: No rules Jan 30 12:58:12.769011 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:58:12.770867 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 12:58:12.780479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:58:12.795398 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 12:58:12.796657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:58:12.796735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:58:12.798267 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 12:58:12.801041 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 12:58:12.802583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 12:58:12.804913 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:58:12.805565 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 30 12:58:12.809651 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 12:58:12.812266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:58:12.819818 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 12:58:12.824240 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:58:12.841284 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 12:58:12.891190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 12:58:12.892801 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 12:58:12.894586 systemd-networkd[1355]: lo: Link UP Jan 30 12:58:12.894828 systemd-networkd[1355]: lo: Gained carrier Jan 30 12:58:12.895998 systemd-networkd[1355]: Enumeration completed Jan 30 12:58:12.896135 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:58:12.896892 systemd-networkd[1355]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:58:12.896895 systemd-networkd[1355]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:58:12.897732 systemd-networkd[1355]: eth0: Link UP Jan 30 12:58:12.897740 systemd-networkd[1355]: eth0: Gained carrier Jan 30 12:58:12.897753 systemd-networkd[1355]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:58:12.906307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 12:58:12.907903 systemd-resolved[1356]: Positive Trust Anchors: Jan 30 12:58:12.907925 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:58:12.907957 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:58:12.914331 systemd-networkd[1355]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:58:12.914824 systemd-resolved[1356]: Defaulting to hostname 'linux'. Jan 30 12:58:12.914974 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Jan 30 12:58:12.915868 systemd-timesyncd[1358]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 12:58:12.915924 systemd-timesyncd[1358]: Initial clock synchronization to Thu 2025-01-30 12:58:12.936215 UTC. Jan 30 12:58:12.916681 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:58:12.918010 systemd[1]: Reached target network.target - Network. Jan 30 12:58:12.919024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:58:12.920684 systemd[1]: Reached target sysinit.target - System Initialization. 
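The DHCPv4 lease above puts eth0 at 10.0.0.71/16 with gateway 10.0.0.1 (which also serves as the NTP source for timesyncd). A quick sanity check of those values with Python's ipaddress module, using only the numbers from the log, looks like:

```python
import ipaddress

# Values taken directly from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("10.0.0.71/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network:", iface.network)                      # 10.0.0.0/16
print("broadcast:", iface.network.broadcast_address)  # 10.0.255.255
print("gateway on-link:", gateway in iface.network)   # True
```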
Jan 30 12:58:12.921937 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 12:58:12.923334 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 12:58:12.924941 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 12:58:12.926351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 12:58:12.927706 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 12:58:12.929319 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 12:58:12.929358 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:58:12.930290 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:58:12.932642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 12:58:12.935231 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:58:12.945228 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:58:12.947455 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:58:12.948840 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:58:12.949898 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:58:12.951234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:58:12.951271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:58:12.952887 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:58:12.955125 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:58:12.957210 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:58:12.960612 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:58:12.962594 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:58:12.964264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:58:12.970583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 12:58:12.973307 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:58:12.980152 jq[1403]: false Jan 30 12:58:12.981549 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 12:58:12.988288 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 12:58:12.989432 extend-filesystems[1404]: Found loop3 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found loop4 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found loop5 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda1 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda2 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda3 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found usr Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda4 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda6 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda7 Jan 30 12:58:12.990532 extend-filesystems[1404]: Found vda9 Jan 30 12:58:12.990532 extend-filesystems[1404]: Checking size of /dev/vda9 Jan 30 12:58:13.000292 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:58:13.000796 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:58:13.001543 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:58:13.003701 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:58:13.007406 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 12:58:13.007583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:58:13.014929 extend-filesystems[1404]: Resized partition /dev/vda9 Jan 30 12:58:13.024248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1286) Jan 30 12:58:13.017005 dbus-daemon[1402]: [system] SELinux support is enabled Jan 30 12:58:13.015296 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:58:13.015458 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 12:58:13.023433 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:58:13.025240 jq[1422]: true Jan 30 12:58:13.028767 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 12:58:13.030099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:58:13.034919 extend-filesystems[1427]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:58:13.044222 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:58:13.052664 tar[1424]: linux-arm64/helm Jan 30 12:58:13.059341 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:58:13.059420 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:58:13.060845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:58:13.060869 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
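The on-line resize logged above grows /dev/vda9 from 553472 to 1864699 blocks. With the 4 KiB block size the kernel reports, that is roughly 2.1 GiB before and 7.1 GiB after:

```python
# Block counts from the EXT4-fs resize messages above; block size is 4 KiB.
BLOCK_SIZE = 4096
before_blocks = 553472
after_blocks = 1864699

before = before_blocks * BLOCK_SIZE
after = after_blocks * BLOCK_SIZE
gib = 1024 ** 3

print(f"before:   {before / gib:.2f} GiB")        # ~2.11 GiB
print(f"after:    {after / gib:.2f} GiB")         # ~7.11 GiB
print(f"grown by: {(after - before) / gib:.2f} GiB")  # ~5.00 GiB
```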
Jan 30 12:58:13.082354 (ntainerd)[1431]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:58:13.089117 jq[1430]: true Jan 30 12:58:13.089828 systemd-logind[1410]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:58:13.095135 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:58:13.099272 systemd-logind[1410]: New seat seat0. Jan 30 12:58:13.125244 extend-filesystems[1427]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:58:13.125244 extend-filesystems[1427]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:58:13.125244 extend-filesystems[1427]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:58:13.103608 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:58:13.132863 update_engine[1421]: I20250130 12:58:13.125000 1421 main.cc:92] Flatcar Update Engine starting Jan 30 12:58:13.132863 update_engine[1421]: I20250130 12:58:13.128501 1421 update_check_scheduler.cc:74] Next update check in 8m52s Jan 30 12:58:13.133147 extend-filesystems[1404]: Resized filesystem in /dev/vda9 Jan 30 12:58:13.129554 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:58:13.131104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:58:13.132810 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:58:13.148564 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:58:13.180354 bash[1455]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:58:13.181327 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:58:13.186269 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:58:13.198908 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:58:13.355106 containerd[1431]: time="2025-01-30T12:58:13.353699227Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:58:13.381367 containerd[1431]: time="2025-01-30T12:58:13.381294236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383140094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383181761Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383199572Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383379687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383399059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
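containerd starts here, and the configuration dump that follows shows the CRI plugin registering runc as the default runtime with the runc.v2 shim and SystemdCgroup:true. A minimal sketch of the matching fragment of a containerd config.toml (the target path and surrounding keys are assumptions for this sketch, not taken from this machine) could be written out like this:

```python
from pathlib import Path

# Fragment of a containerd CRI configuration matching the runc runtime
# options visible in the dump below (runc.v2 shim, SystemdCgroup:true).
CONFIG = """\
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""

if __name__ == "__main__":
    # Written to /tmp here so the sketch does not touch a real installation.
    Path("/tmp/containerd-config-example.toml").write_text(CONFIG)
    print(CONFIG)
```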
type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383460539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383472947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383642175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383660426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383675116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384275 containerd[1431]: time="2025-01-30T12:58:13.383684522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.383756808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.383950691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.384053197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.384090541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.384175675Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:58:13.384566 containerd[1431]: time="2025-01-30T12:58:13.384214620Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:58:13.388719 containerd[1431]: time="2025-01-30T12:58:13.388677068Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:58:13.388890 containerd[1431]: time="2025-01-30T12:58:13.388873753Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:58:13.388974 containerd[1431]: time="2025-01-30T12:58:13.388940676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:58:13.389082 containerd[1431]: time="2025-01-30T12:58:13.389051026Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:58:13.389152 containerd[1431]: time="2025-01-30T12:58:13.389138162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 30 12:58:13.389382 containerd[1431]: time="2025-01-30T12:58:13.389359383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:58:13.389752 containerd[1431]: time="2025-01-30T12:58:13.389720974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:58:13.389893 containerd[1431]: time="2025-01-30T12:58:13.389875232Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:58:13.389918 containerd[1431]: time="2025-01-30T12:58:13.389897046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:58:13.389937 containerd[1431]: time="2025-01-30T12:58:13.389912616Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:58:13.389973 containerd[1431]: time="2025-01-30T12:58:13.389941354Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.389973 containerd[1431]: time="2025-01-30T12:58:13.389955723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.389973 containerd[1431]: time="2025-01-30T12:58:13.389969372Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390030 containerd[1431]: time="2025-01-30T12:58:13.389985342Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390030 containerd[1431]: time="2025-01-30T12:58:13.390001713Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390030 containerd[1431]: time="2025-01-30T12:58:13.390019364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390106 containerd[1431]: time="2025-01-30T12:58:13.390033133Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390106 containerd[1431]: time="2025-01-30T12:58:13.390045421Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:58:13.390106 containerd[1431]: time="2025-01-30T12:58:13.390098575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390157 containerd[1431]: time="2025-01-30T12:58:13.390115385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390157 containerd[1431]: time="2025-01-30T12:58:13.390128674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390157 containerd[1431]: time="2025-01-30T12:58:13.390149567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390219 containerd[1431]: time="2025-01-30T12:58:13.390162696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390219 containerd[1431]: time="2025-01-30T12:58:13.390176184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 30 12:58:13.390219 containerd[1431]: time="2025-01-30T12:58:13.390187752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390219 containerd[1431]: time="2025-01-30T12:58:13.390200600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390219 containerd[1431]: time="2025-01-30T12:58:13.390215169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390303 containerd[1431]: time="2025-01-30T12:58:13.390231179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390303 containerd[1431]: time="2025-01-30T12:58:13.390247510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390303 containerd[1431]: time="2025-01-30T12:58:13.390260918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390303 containerd[1431]: time="2025-01-30T12:58:13.390274887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390303 containerd[1431]: time="2025-01-30T12:58:13.390290937Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:58:13.390398 containerd[1431]: time="2025-01-30T12:58:13.390313432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390398 containerd[1431]: time="2025-01-30T12:58:13.390326760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390398 containerd[1431]: time="2025-01-30T12:58:13.390338848Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:58:13.390532 containerd[1431]: time="2025-01-30T12:58:13.390518523Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:58:13.390557 containerd[1431]: time="2025-01-30T12:58:13.390540817Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:58:13.390557 containerd[1431]: time="2025-01-30T12:58:13.390552865Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:58:13.390610 containerd[1431]: time="2025-01-30T12:58:13.390570636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:58:13.390610 containerd[1431]: time="2025-01-30T12:58:13.390581003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:58:13.390610 containerd[1431]: time="2025-01-30T12:58:13.390594491Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:58:13.390610 containerd[1431]: time="2025-01-30T12:58:13.390605018Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:58:13.390679 containerd[1431]: time="2025-01-30T12:58:13.390615985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 12:58:13.391192 containerd[1431]: time="2025-01-30T12:58:13.391124389Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:58:13.391322 containerd[1431]: time="2025-01-30T12:58:13.391204040Z" level=info msg="Connect containerd service" Jan 30 12:58:13.391322 containerd[1431]: time="2025-01-30T12:58:13.391235300Z" level=info msg="using legacy CRI server" Jan 30 12:58:13.391322 containerd[1431]: time="2025-01-30T12:58:13.391246867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:58:13.391392 containerd[1431]: time="2025-01-30T12:58:13.391344810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:58:13.392174 containerd[1431]: time="2025-01-30T12:58:13.392141558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:58:13.392482 
containerd[1431]: time="2025-01-30T12:58:13.392424659Z" level=info msg="Start subscribing containerd event" Jan 30 12:58:13.395083 containerd[1431]: time="2025-01-30T12:58:13.393810383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:58:13.395083 containerd[1431]: time="2025-01-30T12:58:13.393906445Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:58:13.395988 containerd[1431]: time="2025-01-30T12:58:13.395951270Z" level=info msg="Start recovering state" Jan 30 12:58:13.396195 containerd[1431]: time="2025-01-30T12:58:13.396174012Z" level=info msg="Start event monitor" Jan 30 12:58:13.396264 containerd[1431]: time="2025-01-30T12:58:13.396252942Z" level=info msg="Start snapshots syncer" Jan 30 12:58:13.396364 containerd[1431]: time="2025-01-30T12:58:13.396302534Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:58:13.396551 containerd[1431]: time="2025-01-30T12:58:13.396536563Z" level=info msg="Start streaming server" Jan 30 12:58:13.396947 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:58:13.399215 containerd[1431]: time="2025-01-30T12:58:13.399181771Z" level=info msg="containerd successfully booted in 0.046976s" Jan 30 12:58:13.433183 tar[1424]: linux-arm64/LICENSE Jan 30 12:58:13.433405 tar[1424]: linux-arm64/README.md Jan 30 12:58:13.447610 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 12:58:14.141424 systemd-networkd[1355]: eth0: Gained IPv6LL Jan 30 12:58:14.144686 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:58:14.147292 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:58:14.164428 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:58:14.168101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:14.171232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:58:14.199490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:58:14.202207 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:58:14.204118 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:58:14.208243 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:58:14.575716 sshd_keygen[1419]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:58:14.599571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:58:14.615397 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:58:14.624461 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:58:14.625738 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:58:14.631403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:58:14.676047 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:58:14.685429 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:58:14.688045 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:58:14.689644 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:58:14.780907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:14.782775 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 30 12:58:14.785382 (kubelet)[1513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:58:14.788659 systemd[1]: Startup finished in 681ms (kernel) + 5.288s (initrd) + 3.797s (userspace) = 9.766s. Jan 30 12:58:15.347693 kubelet[1513]: E0130 12:58:15.347635 1513 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:58:15.349720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:58:15.349856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:58:19.020289 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:58:19.022125 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:33740.service - OpenSSH per-connection server daemon (10.0.0.1:33740). Jan 30 12:58:19.157091 sshd[1527]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.159149 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.167668 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:58:19.178412 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:58:19.180188 systemd-logind[1410]: New session 1 of user core. Jan 30 12:58:19.198565 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:58:19.209400 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:58:19.212520 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:58:19.290670 systemd[1531]: Queued start job for default target default.target. Jan 30 12:58:19.301956 systemd[1531]: Created slice app.slice - User Application Slice. Jan 30 12:58:19.302014 systemd[1531]: Reached target paths.target - Paths. Jan 30 12:58:19.302028 systemd[1531]: Reached target timers.target - Timers. Jan 30 12:58:19.303502 systemd[1531]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:58:19.318924 systemd[1531]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:58:19.319052 systemd[1531]: Reached target sockets.target - Sockets. Jan 30 12:58:19.319085 systemd[1531]: Reached target basic.target - Basic System. Jan 30 12:58:19.319129 systemd[1531]: Reached target default.target - Main User Target. Jan 30 12:58:19.319158 systemd[1531]: Startup finished in 99ms. Jan 30 12:58:19.319429 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:58:19.321241 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:58:19.387637 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:33756.service - OpenSSH per-connection server daemon (10.0.0.1:33756). Jan 30 12:58:19.435002 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 33756 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.436489 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.441878 systemd-logind[1410]: New session 2 of user core. Jan 30 12:58:19.449260 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 12:58:19.503859 sshd[1542]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:19.516680 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:33756.service: Deactivated successfully. Jan 30 12:58:19.519635 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:58:19.521229 systemd-logind[1410]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:58:19.522636 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:33758.service - OpenSSH per-connection server daemon (10.0.0.1:33758). Jan 30 12:58:19.524060 systemd-logind[1410]: Removed session 2. Jan 30 12:58:19.557023 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 33758 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.558308 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.562118 systemd-logind[1410]: New session 3 of user core. Jan 30 12:58:19.573282 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:58:19.620904 sshd[1549]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:19.629627 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:33758.service: Deactivated successfully. Jan 30 12:58:19.631035 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:58:19.631676 systemd-logind[1410]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:58:19.649403 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Jan 30 12:58:19.650452 systemd-logind[1410]: Removed session 3. Jan 30 12:58:19.680012 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.681352 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.685472 systemd-logind[1410]: New session 4 of user core. Jan 30 12:58:19.695243 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:58:19.747509 sshd[1556]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:19.757893 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:33768.service: Deactivated successfully. Jan 30 12:58:19.759651 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:58:19.762398 systemd-logind[1410]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:58:19.775409 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). Jan 30 12:58:19.776230 systemd-logind[1410]: Removed session 4. Jan 30 12:58:19.810965 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.812750 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.816440 systemd-logind[1410]: New session 5 of user core. Jan 30 12:58:19.823265 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:58:19.884438 sudo[1566]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:58:19.884737 sudo[1566]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:58:19.900042 sudo[1566]: pam_unix(sudo:session): session closed for user root Jan 30 12:58:19.902957 sshd[1563]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:19.917924 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:33780.service: Deactivated successfully. 
Jan 30 12:58:19.921295 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:58:19.922666 systemd-logind[1410]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:58:19.924260 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:33788.service - OpenSSH per-connection server daemon (10.0.0.1:33788). Jan 30 12:58:19.925098 systemd-logind[1410]: Removed session 5. Jan 30 12:58:19.958725 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 33788 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:19.960226 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:19.964659 systemd-logind[1410]: New session 6 of user core. Jan 30 12:58:19.977296 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:58:20.029550 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:58:20.029832 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:58:20.033028 sudo[1575]: pam_unix(sudo:session): session closed for user root Jan 30 12:58:20.037972 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 12:58:20.038293 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:58:20.058432 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 12:58:20.059673 auditctl[1578]: No rules Jan 30 12:58:20.060625 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:58:20.060855 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 12:58:20.062975 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:58:20.088041 augenrules[1596]: No rules Jan 30 12:58:20.089342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:58:20.090705 sudo[1574]: pam_unix(sudo:session): session closed for user root Jan 30 12:58:20.092205 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:20.108580 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:33788.service: Deactivated successfully. Jan 30 12:58:20.109981 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:58:20.111224 systemd-logind[1410]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:58:20.112417 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:33802.service - OpenSSH per-connection server daemon (10.0.0.1:33802). Jan 30 12:58:20.113200 systemd-logind[1410]: Removed session 6. Jan 30 12:58:20.153359 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 33802 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:58:20.154589 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:20.160820 systemd-logind[1410]: New session 7 of user core. Jan 30 12:58:20.170245 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:58:20.220814 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:58:20.221122 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:58:20.554751 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 12:58:20.554986 (dockerd)[1626]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 12:58:20.822293 dockerd[1626]: time="2025-01-30T12:58:20.822160437Z" level=info msg="Starting up" Jan 30 12:58:20.967644 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2984676618-merged.mount: Deactivated successfully. Jan 30 12:58:20.990903 dockerd[1626]: time="2025-01-30T12:58:20.990808583Z" level=info msg="Loading containers: start." Jan 30 12:58:21.083091 kernel: Initializing XFRM netlink socket Jan 30 12:58:21.155569 systemd-networkd[1355]: docker0: Link UP Jan 30 12:58:21.176594 dockerd[1626]: time="2025-01-30T12:58:21.176549023Z" level=info msg="Loading containers: done." Jan 30 12:58:21.188513 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2392322443-merged.mount: Deactivated successfully. Jan 30 12:58:21.191288 dockerd[1626]: time="2025-01-30T12:58:21.190788193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 12:58:21.191288 dockerd[1626]: time="2025-01-30T12:58:21.190921139Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 12:58:21.191288 dockerd[1626]: time="2025-01-30T12:58:21.191050162Z" level=info msg="Daemon has completed initialization" Jan 30 12:58:21.227591 dockerd[1626]: time="2025-01-30T12:58:21.227434015Z" level=info msg="API listen on /run/docker.sock" Jan 30 12:58:21.227895 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 12:58:21.860591 containerd[1431]: time="2025-01-30T12:58:21.860544135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 12:58:22.519901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773632382.mount: Deactivated successfully. 
Jan 30 12:58:23.385865 containerd[1431]: time="2025-01-30T12:58:23.385792362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:23.387367 containerd[1431]: time="2025-01-30T12:58:23.387323474Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 30 12:58:23.388347 containerd[1431]: time="2025-01-30T12:58:23.388322218Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:23.391282 containerd[1431]: time="2025-01-30T12:58:23.391229409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:23.392577 containerd[1431]: time="2025-01-30T12:58:23.392517047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.531924928s" Jan 30 12:58:23.392577 containerd[1431]: time="2025-01-30T12:58:23.392549342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 12:58:23.412327 containerd[1431]: time="2025-01-30T12:58:23.412276988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 12:58:25.060762 containerd[1431]: time="2025-01-30T12:58:25.060697768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:25.068167 containerd[1431]: time="2025-01-30T12:58:25.068120204Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 30 12:58:25.072832 containerd[1431]: time="2025-01-30T12:58:25.072782758Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:25.081198 containerd[1431]: time="2025-01-30T12:58:25.081148085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:25.082202 containerd[1431]: time="2025-01-30T12:58:25.082160127Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.669838118s" Jan 30 12:58:25.082289 containerd[1431]: time="2025-01-30T12:58:25.082199184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 12:58:25.103001 
containerd[1431]: time="2025-01-30T12:58:25.102949152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 12:58:25.600183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 12:58:25.612295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:25.714478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:25.718679 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:58:25.770800 kubelet[1861]: E0130 12:58:25.770744 1861 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:58:25.773582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:58:25.773708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:58:26.330206 containerd[1431]: time="2025-01-30T12:58:26.329814744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:26.332349 containerd[1431]: time="2025-01-30T12:58:26.332010311Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 30 12:58:26.333916 containerd[1431]: time="2025-01-30T12:58:26.333856771Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:26.337150 containerd[1431]: time="2025-01-30T12:58:26.336464833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:26.338590 containerd[1431]: time="2025-01-30T12:58:26.338542551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.23555202s" Jan 30 12:58:26.338590 containerd[1431]: time="2025-01-30T12:58:26.338581007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 12:58:26.358710 containerd[1431]: time="2025-01-30T12:58:26.358652126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 12:58:27.401373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812600352.mount: Deactivated successfully. 
Jan 30 12:58:27.596140 containerd[1431]: time="2025-01-30T12:58:27.596084687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:27.597481 containerd[1431]: time="2025-01-30T12:58:27.597388501Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 12:58:27.598341 containerd[1431]: time="2025-01-30T12:58:27.598300034Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:27.600403 containerd[1431]: time="2025-01-30T12:58:27.600361277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:27.601172 containerd[1431]: time="2025-01-30T12:58:27.601134274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.242426724s" Jan 30 12:58:27.601227 containerd[1431]: time="2025-01-30T12:58:27.601176051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 12:58:27.620957 containerd[1431]: time="2025-01-30T12:58:27.620898762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 12:58:28.134037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128431025.mount: Deactivated successfully. 
Jan 30 12:58:28.774051 containerd[1431]: time="2025-01-30T12:58:28.773988808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:28.774598 containerd[1431]: time="2025-01-30T12:58:28.774547029Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 12:58:28.775436 containerd[1431]: time="2025-01-30T12:58:28.775378719Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:28.778574 containerd[1431]: time="2025-01-30T12:58:28.778543734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:28.779747 containerd[1431]: time="2025-01-30T12:58:28.779705194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.158761494s" Jan 30 12:58:28.779792 containerd[1431]: time="2025-01-30T12:58:28.779744970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 12:58:28.798604 containerd[1431]: time="2025-01-30T12:58:28.798553947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 12:58:29.233795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113982255.mount: Deactivated successfully. 
Jan 30 12:58:29.241544 containerd[1431]: time="2025-01-30T12:58:29.241491174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:29.242320 containerd[1431]: time="2025-01-30T12:58:29.242271714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 30 12:58:29.243103 containerd[1431]: time="2025-01-30T12:58:29.243010238Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:29.246676 containerd[1431]: time="2025-01-30T12:58:29.246600977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:29.247562 containerd[1431]: time="2025-01-30T12:58:29.247478394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 448.877428ms" Jan 30 12:58:29.247562 containerd[1431]: time="2025-01-30T12:58:29.247521330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 12:58:29.266015 containerd[1431]: time="2025-01-30T12:58:29.265976739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 12:58:29.834542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956186063.mount: Deactivated successfully. Jan 30 12:58:31.320109 containerd[1431]: time="2025-01-30T12:58:31.320036361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:31.320621 containerd[1431]: time="2025-01-30T12:58:31.320589040Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 30 12:58:31.321620 containerd[1431]: time="2025-01-30T12:58:31.321582518Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:31.324862 containerd[1431]: time="2025-01-30T12:58:31.324800438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:31.326140 containerd[1431]: time="2025-01-30T12:58:31.326104828Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.060087075s" Jan 30 12:58:31.326204 containerd[1431]: time="2025-01-30T12:58:31.326144202Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 12:58:34.902765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 12:58:34.915362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:34.935189 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... Jan 30 12:58:34.935209 systemd[1]: Reloading... Jan 30 12:58:35.011307 zram_generator::config[2119]: No configuration found. Jan 30 12:58:35.109928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:58:35.169761 systemd[1]: Reloading finished in 234 ms. Jan 30 12:58:35.215469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:58:35.215552 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:58:35.215809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:35.218111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:35.316820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:35.321900 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:58:35.367764 kubelet[2165]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:58:35.367764 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:58:35.367764 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:58:35.368162 kubelet[2165]: I0130 12:58:35.367968 2165 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:58:36.020403 kubelet[2165]: I0130 12:58:36.020366 2165 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:58:36.020403 kubelet[2165]: I0130 12:58:36.020397 2165 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:58:36.020642 kubelet[2165]: I0130 12:58:36.020627 2165 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:58:36.053122 kubelet[2165]: I0130 12:58:36.053064 2165 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:58:36.053320 kubelet[2165]: E0130 12:58:36.053285 2165 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.062902 kubelet[2165]: I0130 12:58:36.062855 2165 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:58:36.063741 kubelet[2165]: I0130 12:58:36.063679 2165 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:58:36.063893 kubelet[2165]: I0130 12:58:36.063716 2165 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:58:36.064094 kubelet[2165]: I0130 12:58:36.064083 2165 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:58:36.064094 kubelet[2165]: I0130 12:58:36.064095 2165 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:58:36.064604 kubelet[2165]: I0130 12:58:36.064577 2165 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:58:36.066508 kubelet[2165]: I0130 12:58:36.066468 2165 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:58:36.066508 kubelet[2165]: I0130 12:58:36.066501 2165 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:58:36.066884 kubelet[2165]: W0130 12:58:36.066819 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.066884 kubelet[2165]: E0130 12:58:36.066885 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.067135 kubelet[2165]: I0130 12:58:36.067125 2165 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:58:36.067624 kubelet[2165]: I0130 12:58:36.067607 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:58:36.068480 kubelet[2165]: W0130 12:58:36.068431 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.068534 kubelet[2165]: E0130 12:58:36.068484 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.070150 kubelet[2165]: I0130 12:58:36.070109 2165 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:58:36.070848 kubelet[2165]: I0130 12:58:36.070812 2165 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:58:36.071114 kubelet[2165]: W0130 12:58:36.071095 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 12:58:36.072459 kubelet[2165]: I0130 12:58:36.072420 2165 server.go:1264] "Started kubelet" Jan 30 12:58:36.073063 kubelet[2165]: I0130 12:58:36.073014 2165 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:58:36.074597 kubelet[2165]: I0130 12:58:36.074562 2165 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:58:36.076525 kubelet[2165]: I0130 12:58:36.076450 2165 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:58:36.076775 kubelet[2165]: I0130 12:58:36.076742 2165 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:58:36.080098 kubelet[2165]: I0130 12:58:36.079866 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:58:36.080922 kubelet[2165]: E0130 12:58:36.080483 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f79d025307050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:58:36.072390736 +0000 UTC m=+0.747081397,LastTimestamp:2025-01-30 12:58:36.072390736 +0000 UTC m=+0.747081397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:58:36.084489 kubelet[2165]: I0130 12:58:36.083000 2165 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:58:36.084489 kubelet[2165]: I0130 12:58:36.083831 2165 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:58:36.084736 kubelet[2165]: I0130 12:58:36.084715 2165 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:58:36.085754 kubelet[2165]: E0130 12:58:36.085707 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Jan 30 12:58:36.086077 kubelet[2165]: W0130 12:58:36.086014 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.086077 kubelet[2165]: E0130 12:58:36.086086 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.088174 kubelet[2165]: I0130 12:58:36.088145 2165 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:58:36.088174 kubelet[2165]: I0130 12:58:36.088167 2165 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:58:36.088298 kubelet[2165]: I0130 12:58:36.088234 2165 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:58:36.089100 kubelet[2165]: E0130 12:58:36.089055 2165 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:58:36.103730 kubelet[2165]: I0130 12:58:36.103704 2165 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:58:36.103730 kubelet[2165]: I0130 12:58:36.103721 2165 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:58:36.103730 kubelet[2165]: I0130 12:58:36.103740 2165 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:58:36.108274 kubelet[2165]: I0130 12:58:36.108195 2165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:58:36.110148 kubelet[2165]: I0130 12:58:36.109314 2165 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 12:58:36.110148 kubelet[2165]: I0130 12:58:36.109488 2165 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:58:36.110148 kubelet[2165]: I0130 12:58:36.109507 2165 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:58:36.110148 kubelet[2165]: E0130 12:58:36.109555 2165 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:58:36.111190 kubelet[2165]: W0130 12:58:36.111136 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.111330 kubelet[2165]: E0130 12:58:36.111313 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:36.181002 kubelet[2165]: I0130 12:58:36.180955 2165 policy_none.go:49] "None policy: Start" Jan 30 12:58:36.181916 kubelet[2165]: I0130 12:58:36.181880 2165 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:58:36.182368 kubelet[2165]: I0130 12:58:36.181976 2165 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:58:36.184940 kubelet[2165]: I0130 12:58:36.184915 2165 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:36.185421 kubelet[2165]: E0130 12:58:36.185390 2165 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 30 12:58:36.190931 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 12:58:36.202760 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 12:58:36.205818 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 12:58:36.209654 kubelet[2165]: E0130 12:58:36.209617 2165 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 12:58:36.214044 kubelet[2165]: I0130 12:58:36.213992 2165 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:58:36.214312 kubelet[2165]: I0130 12:58:36.214254 2165 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:58:36.214752 kubelet[2165]: I0130 12:58:36.214372 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:58:36.216285 kubelet[2165]: E0130 12:58:36.216267 2165 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 12:58:36.287147 kubelet[2165]: E0130 12:58:36.287024 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jan 30 12:58:36.386751 kubelet[2165]: I0130 12:58:36.386714 2165 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:36.387123 kubelet[2165]: E0130 12:58:36.387037 2165 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 30 12:58:36.410346 kubelet[2165]: I0130 12:58:36.410295 2165 topology_manager.go:215] "Topology Admit Handler" podUID="e7df7801ff95b77a9d9d9bc9928ccdaa" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:58:36.411489 kubelet[2165]: I0130 12:58:36.411458 2165 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:58:36.412759 kubelet[2165]: I0130 12:58:36.412726 2165 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:58:36.419769 systemd[1]: Created slice kubepods-burstable-pode7df7801ff95b77a9d9d9bc9928ccdaa.slice - libcontainer container kubepods-burstable-pode7df7801ff95b77a9d9d9bc9928ccdaa.slice. Jan 30 12:58:36.432102 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 12:58:36.452100 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. 
Jan 30 12:58:36.486206 kubelet[2165]: I0130 12:58:36.486167 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:36.486206 kubelet[2165]: I0130 12:58:36.486206 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:36.486350 kubelet[2165]: I0130 12:58:36.486230 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:36.486350 kubelet[2165]: I0130 12:58:36.486246 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:36.486350 kubelet[2165]: I0130 12:58:36.486270 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:36.486350 kubelet[2165]: I0130 12:58:36.486296 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:36.486350 kubelet[2165]: I0130 12:58:36.486313 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:36.486445 kubelet[2165]: I0130 12:58:36.486328 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:36.486445 kubelet[2165]: I0130 12:58:36.486345 2165 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 30 12:58:36.690292 kubelet[2165]: E0130 12:58:36.690175 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jan 30 12:58:36.729547 kubelet[2165]: E0130 12:58:36.729511 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:36.730327 containerd[1431]: time="2025-01-30T12:58:36.730272595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7df7801ff95b77a9d9d9bc9928ccdaa,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:36.750000 kubelet[2165]: E0130 12:58:36.749953 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:36.750564 containerd[1431]: time="2025-01-30T12:58:36.750515101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:36.755001 kubelet[2165]: E0130 12:58:36.754945 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:36.755451 containerd[1431]: time="2025-01-30T12:58:36.755417688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:36.789116 kubelet[2165]: I0130 12:58:36.788919 2165 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:36.789364 kubelet[2165]: E0130 12:58:36.789331 2165 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 30 12:58:37.348677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462810946.mount: Deactivated successfully. 
Jan 30 12:58:37.373607 containerd[1431]: time="2025-01-30T12:58:37.373538980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:58:37.374247 kubelet[2165]: W0130 12:58:37.374175 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.374247 kubelet[2165]: E0130 12:58:37.374224 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.380708 containerd[1431]: time="2025-01-30T12:58:37.380632534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:58:37.381577 containerd[1431]: time="2025-01-30T12:58:37.381539844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:58:37.382553 containerd[1431]: time="2025-01-30T12:58:37.382510373Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:58:37.382900 containerd[1431]: time="2025-01-30T12:58:37.382875442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:58:37.383958 containerd[1431]: time="2025-01-30T12:58:37.383871178Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:58:37.384954 containerd[1431]: time="2025-01-30T12:58:37.384864074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:58:37.390905 containerd[1431]: time="2025-01-30T12:58:37.390842896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:58:37.392490 containerd[1431]: time="2025-01-30T12:58:37.392448134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 641.82244ms" Jan 30 12:58:37.394914 containerd[1431]: time="2025-01-30T12:58:37.394879178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.386307ms" Jan 30 12:58:37.395664 containerd[1431]: time="2025-01-30T12:58:37.395483959Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.123417ms" Jan 30 12:58:37.494054 kubelet[2165]: E0130 12:58:37.494006 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Jan 30 12:58:37.582923 kubelet[2165]: W0130 12:58:37.582853 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.582923 kubelet[2165]: E0130 12:58:37.582927 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.594523 kubelet[2165]: W0130 12:58:37.589340 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.594523 kubelet[2165]: E0130 12:58:37.589400 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.606816 kubelet[2165]: I0130 12:58:37.606385 2165 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:37.606816 kubelet[2165]: E0130 12:58:37.606697 2165 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 30 12:58:37.615219 kubelet[2165]: W0130 12:58:37.615174 2165 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.615219 kubelet[2165]: E0130 12:58:37.615225 2165 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 30 12:58:37.643487 containerd[1431]: time="2025-01-30T12:58:37.643369617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:37.643487 containerd[1431]: time="2025-01-30T12:58:37.643448120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:37.643487 containerd[1431]: time="2025-01-30T12:58:37.643469727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.643687 containerd[1431]: time="2025-01-30T12:58:37.643571477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.645228 containerd[1431]: time="2025-01-30T12:58:37.645063282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:37.645228 containerd[1431]: time="2025-01-30T12:58:37.645139024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:37.645228 containerd[1431]: time="2025-01-30T12:58:37.645155749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.645390 containerd[1431]: time="2025-01-30T12:58:37.645240294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.647039 containerd[1431]: time="2025-01-30T12:58:37.646905590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:37.647039 containerd[1431]: time="2025-01-30T12:58:37.646971130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:37.647039 containerd[1431]: time="2025-01-30T12:58:37.646982453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.647275 containerd[1431]: time="2025-01-30T12:58:37.647114573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:37.680350 systemd[1]: Started cri-containerd-3be8bea2ba0f7897b796ae6e5aff291d3b094f34d70ed34d807e0d06673e2346.scope - libcontainer container 3be8bea2ba0f7897b796ae6e5aff291d3b094f34d70ed34d807e0d06673e2346. Jan 30 12:58:37.681672 systemd[1]: Started cri-containerd-fc4bb042df478f4577a11570af933284d019f4121fa9ca054bbe06b30b8b8744.scope - libcontainer container fc4bb042df478f4577a11570af933284d019f4121fa9ca054bbe06b30b8b8744. Jan 30 12:58:37.685061 systemd[1]: Started cri-containerd-d87d9b5700d24387fd97914ad0855c9057428ff08e35ba557d713279d0a7fe2b.scope - libcontainer container d87d9b5700d24387fd97914ad0855c9057428ff08e35ba557d713279d0a7fe2b. 
Jan 30 12:58:37.730756 containerd[1431]: time="2025-01-30T12:58:37.723974593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d87d9b5700d24387fd97914ad0855c9057428ff08e35ba557d713279d0a7fe2b\"" Jan 30 12:58:37.731570 containerd[1431]: time="2025-01-30T12:58:37.731426174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3be8bea2ba0f7897b796ae6e5aff291d3b094f34d70ed34d807e0d06673e2346\"" Jan 30 12:58:37.742818 kubelet[2165]: E0130 12:58:37.733275 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:37.742818 kubelet[2165]: E0130 12:58:37.733971 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:37.742963 containerd[1431]: time="2025-01-30T12:58:37.738829579Z" level=info msg="CreateContainer within sandbox \"3be8bea2ba0f7897b796ae6e5aff291d3b094f34d70ed34d807e0d06673e2346\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 12:58:37.742963 containerd[1431]: time="2025-01-30T12:58:37.739300920Z" level=info msg="CreateContainer within sandbox \"d87d9b5700d24387fd97914ad0855c9057428ff08e35ba557d713279d0a7fe2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:58:37.745322 containerd[1431]: time="2025-01-30T12:58:37.745277981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7df7801ff95b77a9d9d9bc9928ccdaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc4bb042df478f4577a11570af933284d019f4121fa9ca054bbe06b30b8b8744\"" Jan 30 12:58:37.746709 kubelet[2165]: E0130 12:58:37.746668 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:37.754804 containerd[1431]: time="2025-01-30T12:58:37.754757645Z" level=info msg="CreateContainer within sandbox \"fc4bb042df478f4577a11570af933284d019f4121fa9ca054bbe06b30b8b8744\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 12:58:37.764459 containerd[1431]: time="2025-01-30T12:58:37.764393396Z" level=info msg="CreateContainer within sandbox \"d87d9b5700d24387fd97914ad0855c9057428ff08e35ba557d713279d0a7fe2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e94bd802de30f20a4b33ea6c0cffa0328b233c25dfd02884a4049d2d9b88732\"" Jan 30 12:58:37.765116 containerd[1431]: time="2025-01-30T12:58:37.765092885Z" level=info msg="StartContainer for \"4e94bd802de30f20a4b33ea6c0cffa0328b233c25dfd02884a4049d2d9b88732\"" Jan 30 12:58:37.770953 containerd[1431]: time="2025-01-30T12:58:37.770902376Z" level=info msg="CreateContainer within sandbox \"3be8bea2ba0f7897b796ae6e5aff291d3b094f34d70ed34d807e0d06673e2346\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36d7a49174216211268961226a024695747e4e62a6fa5d328a975326037b8cc4\"" Jan 30 12:58:37.771540 containerd[1431]: time="2025-01-30T12:58:37.771496673Z" level=info msg="StartContainer for \"36d7a49174216211268961226a024695747e4e62a6fa5d328a975326037b8cc4\"" Jan 30 
12:58:37.780006 containerd[1431]: time="2025-01-30T12:58:37.779945070Z" level=info msg="CreateContainer within sandbox \"fc4bb042df478f4577a11570af933284d019f4121fa9ca054bbe06b30b8b8744\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37a4d899273eb28dd4e2041e35a268c1ab3d4d32fca9597bee10208ef8f942a0\"" Jan 30 12:58:37.782098 containerd[1431]: time="2025-01-30T12:58:37.781036275Z" level=info msg="StartContainer for \"37a4d899273eb28dd4e2041e35a268c1ab3d4d32fca9597bee10208ef8f942a0\"" Jan 30 12:58:37.795299 systemd[1]: Started cri-containerd-4e94bd802de30f20a4b33ea6c0cffa0328b233c25dfd02884a4049d2d9b88732.scope - libcontainer container 4e94bd802de30f20a4b33ea6c0cffa0328b233c25dfd02884a4049d2d9b88732. Jan 30 12:58:37.798664 systemd[1]: Started cri-containerd-36d7a49174216211268961226a024695747e4e62a6fa5d328a975326037b8cc4.scope - libcontainer container 36d7a49174216211268961226a024695747e4e62a6fa5d328a975326037b8cc4. Jan 30 12:58:37.819337 systemd[1]: Started cri-containerd-37a4d899273eb28dd4e2041e35a268c1ab3d4d32fca9597bee10208ef8f942a0.scope - libcontainer container 37a4d899273eb28dd4e2041e35a268c1ab3d4d32fca9597bee10208ef8f942a0. Jan 30 12:58:37.881273 containerd[1431]: time="2025-01-30T12:58:37.880513835Z" level=info msg="StartContainer for \"36d7a49174216211268961226a024695747e4e62a6fa5d328a975326037b8cc4\" returns successfully" Jan 30 12:58:37.881523 containerd[1431]: time="2025-01-30T12:58:37.881318434Z" level=info msg="StartContainer for \"4e94bd802de30f20a4b33ea6c0cffa0328b233c25dfd02884a4049d2d9b88732\" returns successfully" Jan 30 12:58:37.881940 containerd[1431]: time="2025-01-30T12:58:37.881323876Z" level=info msg="StartContainer for \"37a4d899273eb28dd4e2041e35a268c1ab3d4d32fca9597bee10208ef8f942a0\" returns successfully" Jan 30 12:58:38.119117 kubelet[2165]: E0130 12:58:38.118536 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:38.119629 kubelet[2165]: E0130 12:58:38.119464 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:38.122148 kubelet[2165]: E0130 12:58:38.122125 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:39.125710 kubelet[2165]: E0130 12:58:39.125677 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:39.208659 kubelet[2165]: I0130 12:58:39.208624 2165 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:39.536518 kubelet[2165]: E0130 12:58:39.536340 2165 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 12:58:39.955425 kubelet[2165]: I0130 12:58:39.955288 2165 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:58:40.070177 kubelet[2165]: I0130 12:58:40.070136 2165 apiserver.go:52] "Watching apiserver" Jan 30 12:58:40.083881 kubelet[2165]: I0130 12:58:40.083838 2165 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:58:40.135180 kubelet[2165]: E0130 12:58:40.135140 2165 
kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:40.135653 kubelet[2165]: E0130 12:58:40.135616 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:41.782325 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)... Jan 30 12:58:41.782342 systemd[1]: Reloading... Jan 30 12:58:41.842270 zram_generator::config[2478]: No configuration found. Jan 30 12:58:41.945555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:58:42.019867 systemd[1]: Reloading finished in 237 ms. Jan 30 12:58:42.057254 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:42.072192 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:58:42.072433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:42.072504 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 117.3M memory peak, 0B memory swap peak. Jan 30 12:58:42.087386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:58:42.188302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:58:42.193180 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:58:42.252534 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:58:42.252534 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:58:42.252534 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:58:42.253294 kubelet[2520]: I0130 12:58:42.252927 2520 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:58:42.258535 kubelet[2520]: I0130 12:58:42.258255 2520 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:58:42.258535 kubelet[2520]: I0130 12:58:42.258288 2520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:58:42.258824 kubelet[2520]: I0130 12:58:42.258570 2520 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:58:42.260137 kubelet[2520]: I0130 12:58:42.260111 2520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 12:58:42.261707 kubelet[2520]: I0130 12:58:42.261601 2520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:58:42.269682 kubelet[2520]: I0130 12:58:42.269650 2520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 12:58:42.269883 kubelet[2520]: I0130 12:58:42.269848 2520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:58:42.270059 kubelet[2520]: I0130 12:58:42.269884 2520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:58:42.270146 kubelet[2520]: I0130 12:58:42.270077 2520 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:58:42.270146 kubelet[2520]: I0130 12:58:42.270089 2520 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:58:42.270146 kubelet[2520]: I0130 12:58:42.270123 2520 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:58:42.270246 kubelet[2520]: I0130 12:58:42.270232 2520 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:58:42.270275 kubelet[2520]: I0130 12:58:42.270248 2520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:58:42.270295 kubelet[2520]: I0130 12:58:42.270277 2520 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:58:42.271533 kubelet[2520]: I0130 12:58:42.270782 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.272195 2520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.272390 2520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.273582 2520 server.go:1264] "Started kubelet" Jan 30 12:58:42.272793 kubelet[2520]: I0130 
12:58:42.274578 2520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.275178 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.275482 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.275768 2520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.277358 2520 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.277464 2520 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:58:42.272793 kubelet[2520]: I0130 12:58:42.277632 2520 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:58:42.279433 kubelet[2520]: E0130 12:58:42.279407 2520 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:58:42.280311 kubelet[2520]: I0130 12:58:42.279906 2520 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:58:42.280311 kubelet[2520]: I0130 12:58:42.279951 2520 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:58:42.280311 kubelet[2520]: I0130 12:58:42.280017 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:58:42.280943 kubelet[2520]: I0130 12:58:42.280917 2520 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:58:42.301254 kubelet[2520]: I0130 12:58:42.301134 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:58:42.303047 kubelet[2520]: I0130 12:58:42.303013 2520 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 12:58:42.303047 kubelet[2520]: I0130 12:58:42.303053 2520 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:58:42.303289 kubelet[2520]: I0130 12:58:42.303088 2520 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:58:42.303289 kubelet[2520]: E0130 12:58:42.303136 2520 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:58:42.335546 kubelet[2520]: I0130 12:58:42.335441 2520 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:58:42.335546 kubelet[2520]: I0130 12:58:42.335465 2520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:58:42.335546 kubelet[2520]: I0130 12:58:42.335488 2520 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:58:42.335716 kubelet[2520]: I0130 12:58:42.335661 2520 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:58:42.335716 kubelet[2520]: I0130 12:58:42.335674 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:58:42.335716 kubelet[2520]: I0130 12:58:42.335692 2520 policy_none.go:49] "None policy: Start" Jan 30 12:58:42.336709 kubelet[2520]: I0130 12:58:42.336649 2520 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:58:42.336709 kubelet[2520]: I0130 12:58:42.336685 2520 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:58:42.336887 kubelet[2520]: I0130 12:58:42.336828 2520 state_mem.go:75] "Updated machine memory state" Jan 30 12:58:42.342272 kubelet[2520]: I0130 12:58:42.342166 2520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:58:42.342397 kubelet[2520]: I0130 12:58:42.342348 2520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:58:42.342664 kubelet[2520]: I0130 12:58:42.342463 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:58:42.382383 kubelet[2520]: I0130 12:58:42.382340 2520 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:58:42.390033 kubelet[2520]: I0130 12:58:42.389990 2520 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 12:58:42.390343 kubelet[2520]: I0130 12:58:42.390330 2520 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:58:42.403775 kubelet[2520]: I0130 12:58:42.403724 2520 topology_manager.go:215] "Topology Admit Handler" podUID="e7df7801ff95b77a9d9d9bc9928ccdaa" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:58:42.403893 kubelet[2520]: I0130 12:58:42.403845 2520 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:58:42.403893 kubelet[2520]: I0130 12:58:42.403888 2520 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:58:42.578663 kubelet[2520]: I0130 12:58:42.578576 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:42.578663 
kubelet[2520]: I0130 12:58:42.578645 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:42.578663 kubelet[2520]: I0130 12:58:42.578665 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:42.578663 kubelet[2520]: I0130 12:58:42.578691 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:58:42.578906 kubelet[2520]: I0130 12:58:42.578713 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:42.578906 kubelet[2520]: I0130 12:58:42.578728 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:42.578906 kubelet[2520]: I0130 12:58:42.578742 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:42.578906 kubelet[2520]: I0130 12:58:42.578758 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:58:42.578906 kubelet[2520]: I0130 12:58:42.578776 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7df7801ff95b77a9d9d9bc9928ccdaa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7df7801ff95b77a9d9d9bc9928ccdaa\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:58:42.711145 kubelet[2520]: E0130 12:58:42.711094 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:42.711626 kubelet[2520]: E0130 12:58:42.711585 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:42.711857 kubelet[2520]: E0130 12:58:42.711837 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:42.789607 sudo[2554]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 12:58:42.789927 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 12:58:43.243451 sudo[2554]: pam_unix(sudo:session): session closed for user root Jan 30 12:58:43.271782 kubelet[2520]: I0130 12:58:43.271729 2520 apiserver.go:52] "Watching apiserver" Jan 30 12:58:43.279061 kubelet[2520]: I0130 12:58:43.279000 2520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:58:43.328098 kubelet[2520]: E0130 12:58:43.326346 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:43.331104 kubelet[2520]: E0130 12:58:43.328815 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:43.335037 kubelet[2520]: E0130 12:58:43.333222 2520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 12:58:43.335037 kubelet[2520]: E0130 12:58:43.333519 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:43.351931 kubelet[2520]: I0130 12:58:43.351692 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.351670961 podStartE2EDuration="1.351670961s" podCreationTimestamp="2025-01-30 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:43.343113054 +0000 UTC m=+1.146238412" watchObservedRunningTime="2025-01-30 12:58:43.351670961 +0000 UTC m=+1.154796319" Jan 30 12:58:43.351931 kubelet[2520]: I0130 12:58:43.351804 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.351799433 podStartE2EDuration="1.351799433s" podCreationTimestamp="2025-01-30 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:43.351790871 +0000 UTC m=+1.154916229" watchObservedRunningTime="2025-01-30 12:58:43.351799433 +0000 UTC m=+1.154924751" Jan 30 12:58:43.360720 kubelet[2520]: I0130 12:58:43.360554 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3605165399999999 podStartE2EDuration="1.36051654s" podCreationTimestamp="2025-01-30 12:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:43.360236351 +0000 UTC m=+1.163361709" watchObservedRunningTime="2025-01-30 12:58:43.36051654 +0000 UTC m=+1.163641938" 
Jan 30 12:58:44.327953 kubelet[2520]: E0130 12:58:44.327204 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:44.327953 kubelet[2520]: E0130 12:58:44.327431 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:44.703336 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 30 12:58:44.705050 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:44.708844 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:33802.service: Deactivated successfully. Jan 30 12:58:44.712891 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:58:44.713109 systemd[1]: session-7.scope: Consumed 5.917s CPU time, 188.9M memory peak, 0B memory swap peak. Jan 30 12:58:44.713806 systemd-logind[1410]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:58:44.715428 systemd-logind[1410]: Removed session 7. Jan 30 12:58:45.328559 kubelet[2520]: E0130 12:58:45.328532 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:45.332150 kubelet[2520]: E0130 12:58:45.332089 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:52.245834 kubelet[2520]: E0130 12:58:52.245788 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:52.349446 kubelet[2520]: E0130 12:58:52.349383 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:53.267555 kubelet[2520]: E0130 12:58:53.267316 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:55.347329 kubelet[2520]: E0130 12:58:55.347273 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:56.470988 kubelet[2520]: I0130 12:58:56.470911 2520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 12:58:56.471476 containerd[1431]: time="2025-01-30T12:58:56.471323206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 12:58:56.472042 kubelet[2520]: I0130 12:58:56.471548 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 12:58:57.363161 kubelet[2520]: I0130 12:58:57.363011 2520 topology_manager.go:215] "Topology Admit Handler" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" podNamespace="kube-system" podName="cilium-z29nb" Jan 30 12:58:57.366908 kubelet[2520]: I0130 12:58:57.366721 2520 topology_manager.go:215] "Topology Admit Handler" podUID="7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a" podNamespace="kube-system" podName="kube-proxy-zxnn4" Jan 30 12:58:57.383059 systemd[1]: Created slice kubepods-burstable-podc75a59fe_ab84_4816_aafc_90fc0848b961.slice - libcontainer container kubepods-burstable-podc75a59fe_ab84_4816_aafc_90fc0848b961.slice. Jan 30 12:58:57.394845 systemd[1]: Created slice kubepods-besteffort-pod7c6a8a9a_5f4b_4a55_b361_3c4d05b95a4a.slice - libcontainer container kubepods-besteffort-pod7c6a8a9a_5f4b_4a55_b361_3c4d05b95a4a.slice. Jan 30 12:58:57.399656 kubelet[2520]: I0130 12:58:57.399617 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-cgroup\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399656 kubelet[2520]: I0130 12:58:57.399658 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-bpf-maps\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399800 kubelet[2520]: I0130 12:58:57.399676 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-lib-modules\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399800 kubelet[2520]: I0130 12:58:57.399719 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-hubble-tls\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399800 kubelet[2520]: I0130 12:58:57.399797 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qrd\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-kube-api-access-q4qrd\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399876 kubelet[2520]: I0130 12:58:57.399833 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cni-path\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399876 kubelet[2520]: I0130 12:58:57.399851 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wmf\" (UniqueName: \"kubernetes.io/projected/7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a-kube-api-access-r8wmf\") pod \"kube-proxy-zxnn4\" (UID: 
\"7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a\") " pod="kube-system/kube-proxy-zxnn4" Jan 30 12:58:57.399876 kubelet[2520]: I0130 12:58:57.399870 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a-lib-modules\") pod \"kube-proxy-zxnn4\" (UID: \"7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a\") " pod="kube-system/kube-proxy-zxnn4" Jan 30 12:58:57.399941 kubelet[2520]: I0130 12:58:57.399897 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-run\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399941 kubelet[2520]: I0130 12:58:57.399914 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c75a59fe-ab84-4816-aafc-90fc0848b961-clustermesh-secrets\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.399941 kubelet[2520]: I0130 12:58:57.399928 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-net\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400004 kubelet[2520]: I0130 12:58:57.399959 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-config-path\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400004 kubelet[2520]: I0130 12:58:57.399984 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-kernel\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400004 kubelet[2520]: I0130 12:58:57.399999 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a-xtables-lock\") pod \"kube-proxy-zxnn4\" (UID: \"7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a\") " pod="kube-system/kube-proxy-zxnn4" Jan 30 12:58:57.400094 kubelet[2520]: I0130 12:58:57.400012 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-hostproc\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400094 kubelet[2520]: I0130 12:58:57.400026 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-xtables-lock\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400094 kubelet[2520]: I0130 12:58:57.400045 2520 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-etc-cni-netd\") pod \"cilium-z29nb\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " pod="kube-system/cilium-z29nb" Jan 30 12:58:57.400094 kubelet[2520]: I0130 12:58:57.400059 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a-kube-proxy\") pod \"kube-proxy-zxnn4\" (UID: \"7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a\") " pod="kube-system/kube-proxy-zxnn4" Jan 30 12:58:57.497793 kubelet[2520]: I0130 12:58:57.497688 2520 topology_manager.go:215] "Topology Admit Handler" podUID="5d024604-d426-488e-9afc-7400f94be40e" podNamespace="kube-system" podName="cilium-operator-599987898-w6glr" Jan 30 12:58:57.520174 systemd[1]: Created slice kubepods-besteffort-pod5d024604_d426_488e_9afc_7400f94be40e.slice - libcontainer container kubepods-besteffort-pod5d024604_d426_488e_9afc_7400f94be40e.slice. Jan 30 12:58:57.601152 kubelet[2520]: I0130 12:58:57.601114 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6l2\" (UniqueName: \"kubernetes.io/projected/5d024604-d426-488e-9afc-7400f94be40e-kube-api-access-cx6l2\") pod \"cilium-operator-599987898-w6glr\" (UID: \"5d024604-d426-488e-9afc-7400f94be40e\") " pod="kube-system/cilium-operator-599987898-w6glr" Jan 30 12:58:57.601152 kubelet[2520]: I0130 12:58:57.601152 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d024604-d426-488e-9afc-7400f94be40e-cilium-config-path\") pod \"cilium-operator-599987898-w6glr\" (UID: \"5d024604-d426-488e-9afc-7400f94be40e\") " pod="kube-system/cilium-operator-599987898-w6glr" Jan 30 12:58:57.689933 kubelet[2520]: E0130 12:58:57.689790 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:57.690971 containerd[1431]: time="2025-01-30T12:58:57.690425228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z29nb,Uid:c75a59fe-ab84-4816-aafc-90fc0848b961,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:57.709193 kubelet[2520]: E0130 12:58:57.708713 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:57.710196 containerd[1431]: time="2025-01-30T12:58:57.710150103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxnn4,Uid:7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:57.731663 containerd[1431]: time="2025-01-30T12:58:57.731557804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:57.731663 containerd[1431]: time="2025-01-30T12:58:57.731626975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:57.731663 containerd[1431]: time="2025-01-30T12:58:57.731646858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.731899 containerd[1431]: time="2025-01-30T12:58:57.731726190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.743778 containerd[1431]: time="2025-01-30T12:58:57.742855308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:57.743778 containerd[1431]: time="2025-01-30T12:58:57.742918878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:57.743778 containerd[1431]: time="2025-01-30T12:58:57.742945922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.743778 containerd[1431]: time="2025-01-30T12:58:57.743024094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.751251 systemd[1]: Started cri-containerd-7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909.scope - libcontainer container 7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909. Jan 30 12:58:57.756791 systemd[1]: Started cri-containerd-389816d7595a6dc316a77aaedf7a6f8aee5d9b95cdabb9156c6b6fbbd9e8988a.scope - libcontainer container 389816d7595a6dc316a77aaedf7a6f8aee5d9b95cdabb9156c6b6fbbd9e8988a. Jan 30 12:58:57.776742 containerd[1431]: time="2025-01-30T12:58:57.776705093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z29nb,Uid:c75a59fe-ab84-4816-aafc-90fc0848b961,Namespace:kube-system,Attempt:0,} returns sandbox id \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\"" Jan 30 12:58:57.778097 kubelet[2520]: E0130 12:58:57.777728 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:57.779181 containerd[1431]: time="2025-01-30T12:58:57.779149159Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 12:58:57.784411 containerd[1431]: time="2025-01-30T12:58:57.783665393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxnn4,Uid:7c6a8a9a-5f4b-4a55-b361-3c4d05b95a4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"389816d7595a6dc316a77aaedf7a6f8aee5d9b95cdabb9156c6b6fbbd9e8988a\"" Jan 30 12:58:57.784495 kubelet[2520]: E0130 12:58:57.784368 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:57.786791 containerd[1431]: time="2025-01-30T12:58:57.786736718Z" level=info msg="CreateContainer within sandbox \"389816d7595a6dc316a77aaedf7a6f8aee5d9b95cdabb9156c6b6fbbd9e8988a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:58:57.799536 containerd[1431]: time="2025-01-30T12:58:57.799405518Z" level=info msg="CreateContainer within sandbox \"389816d7595a6dc316a77aaedf7a6f8aee5d9b95cdabb9156c6b6fbbd9e8988a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4398018d4d6bf7be3066bfc77080fe28b7ef7bc90352ecce7af0028819b063e7\"" Jan 30 12:58:57.800009 containerd[1431]: time="2025-01-30T12:58:57.799974728Z" level=info 
msg="StartContainer for \"4398018d4d6bf7be3066bfc77080fe28b7ef7bc90352ecce7af0028819b063e7\"" Jan 30 12:58:57.824542 kubelet[2520]: E0130 12:58:57.824503 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:57.825435 containerd[1431]: time="2025-01-30T12:58:57.825109538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w6glr,Uid:5d024604-d426-488e-9afc-7400f94be40e,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:57.825251 systemd[1]: Started cri-containerd-4398018d4d6bf7be3066bfc77080fe28b7ef7bc90352ecce7af0028819b063e7.scope - libcontainer container 4398018d4d6bf7be3066bfc77080fe28b7ef7bc90352ecce7af0028819b063e7. Jan 30 12:58:57.858110 containerd[1431]: time="2025-01-30T12:58:57.854271943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:57.858110 containerd[1431]: time="2025-01-30T12:58:57.855351433Z" level=info msg="StartContainer for \"4398018d4d6bf7be3066bfc77080fe28b7ef7bc90352ecce7af0028819b063e7\" returns successfully" Jan 30 12:58:57.858110 containerd[1431]: time="2025-01-30T12:58:57.854727655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:57.858110 containerd[1431]: time="2025-01-30T12:58:57.855621676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.858110 containerd[1431]: time="2025-01-30T12:58:57.855724052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:57.881267 systemd[1]: Started cri-containerd-60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18.scope - libcontainer container 60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18. Jan 30 12:58:57.910406 containerd[1431]: time="2025-01-30T12:58:57.910253584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w6glr,Uid:5d024604-d426-488e-9afc-7400f94be40e,Namespace:kube-system,Attempt:0,} returns sandbox id \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\"" Jan 30 12:58:57.912022 kubelet[2520]: E0130 12:58:57.911537 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:58:58.022945 update_engine[1421]: I20250130 12:58:58.022579 1421 update_attempter.cc:509] Updating boot flags... Jan 30 12:58:58.046115 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2795) Jan 30 12:58:58.072108 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2799) Jan 30 12:58:58.120652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2799) Jan 30 12:58:58.373021 kubelet[2520]: E0130 12:58:58.372978 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:01.660451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666929032.mount: Deactivated successfully. 
Jan 30 12:59:04.206365 containerd[1431]: time="2025-01-30T12:59:04.206305742Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:59:04.209615 containerd[1431]: time="2025-01-30T12:59:04.209546112Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 12:59:04.210533 containerd[1431]: time="2025-01-30T12:59:04.210487351Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:59:04.212801 containerd[1431]: time="2025-01-30T12:59:04.212493085Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.433301439s" Jan 30 12:59:04.212801 containerd[1431]: time="2025-01-30T12:59:04.212538531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 12:59:04.215580 containerd[1431]: time="2025-01-30T12:59:04.215535470Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:59:04.217731 containerd[1431]: time="2025-01-30T12:59:04.216707018Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:59:04.283362 containerd[1431]: time="2025-01-30T12:59:04.283302279Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\"" Jan 30 12:59:04.284913 containerd[1431]: time="2025-01-30T12:59:04.284881599Z" level=info msg="StartContainer for \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\"" Jan 30 12:59:04.313102 systemd[1]: run-containerd-runc-k8s.io-26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825-runc.KSHPT4.mount: Deactivated successfully. Jan 30 12:59:04.324320 systemd[1]: Started cri-containerd-26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825.scope - libcontainer container 26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825. 
Jan 30 12:59:04.370603 containerd[1431]: time="2025-01-30T12:59:04.370419055Z" level=info msg="StartContainer for \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\" returns successfully" Jan 30 12:59:04.404965 kubelet[2520]: E0130 12:59:04.404855 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:04.433922 kubelet[2520]: I0130 12:59:04.431499 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxnn4" podStartSLOduration=7.431483217 podStartE2EDuration="7.431483217s" podCreationTimestamp="2025-01-30 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:58.383437343 +0000 UTC m=+16.186562741" watchObservedRunningTime="2025-01-30 12:59:04.431483217 +0000 UTC m=+22.234608615" Jan 30 12:59:04.480179 systemd[1]: cri-containerd-26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825.scope: Deactivated successfully. Jan 30 12:59:04.628035 containerd[1431]: time="2025-01-30T12:59:04.627745556Z" level=info msg="shim disconnected" id=26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825 namespace=k8s.io Jan 30 12:59:04.628035 containerd[1431]: time="2025-01-30T12:59:04.627809244Z" level=warning msg="cleaning up after shim disconnected" id=26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825 namespace=k8s.io Jan 30 12:59:04.628035 containerd[1431]: time="2025-01-30T12:59:04.627820485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:05.275826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825-rootfs.mount: Deactivated successfully. Jan 30 12:59:05.407151 kubelet[2520]: E0130 12:59:05.407118 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:05.410090 containerd[1431]: time="2025-01-30T12:59:05.410020743Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:59:05.423432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837287730.mount: Deactivated successfully. Jan 30 12:59:05.431570 containerd[1431]: time="2025-01-30T12:59:05.431521617Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\"" Jan 30 12:59:05.432450 containerd[1431]: time="2025-01-30T12:59:05.432419487Z" level=info msg="StartContainer for \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\"" Jan 30 12:59:05.476303 systemd[1]: Started cri-containerd-56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb.scope - libcontainer container 56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb. 
Jan 30 12:59:05.509401 containerd[1431]: time="2025-01-30T12:59:05.509351592Z" level=info msg="StartContainer for \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\" returns successfully" Jan 30 12:59:05.515639 containerd[1431]: time="2025-01-30T12:59:05.515255755Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:59:05.516893 containerd[1431]: time="2025-01-30T12:59:05.516176148Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 12:59:05.518439 containerd[1431]: time="2025-01-30T12:59:05.518401580Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:59:05.520144 containerd[1431]: time="2025-01-30T12:59:05.520091747Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.304516593s" Jan 30 12:59:05.520308 containerd[1431]: time="2025-01-30T12:59:05.520288332Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 12:59:05.540579 containerd[1431]: time="2025-01-30T12:59:05.540438320Z" level=info msg="CreateContainer within sandbox \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 12:59:05.560015 containerd[1431]: time="2025-01-30T12:59:05.559951871Z" level=info msg="CreateContainer within sandbox \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\"" Jan 30 12:59:05.562133 containerd[1431]: time="2025-01-30T12:59:05.562039526Z" level=info msg="StartContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\"" Jan 30 12:59:05.572330 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:59:05.572823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:59:05.572924 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:59:05.580035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:59:05.585281 systemd[1]: cri-containerd-56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb.scope: Deactivated successfully. Jan 30 12:59:05.599506 systemd[1]: Started cri-containerd-c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412.scope - libcontainer container c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412. Jan 30 12:59:05.625343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 12:59:05.680179 containerd[1431]: time="2025-01-30T12:59:05.678760625Z" level=info msg="StartContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" returns successfully" Jan 30 12:59:05.680497 containerd[1431]: time="2025-01-30T12:59:05.680322736Z" level=info msg="shim disconnected" id=56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb namespace=k8s.io Jan 30 12:59:05.680497 containerd[1431]: time="2025-01-30T12:59:05.680374263Z" level=warning msg="cleaning up after shim disconnected" id=56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb namespace=k8s.io Jan 30 12:59:05.680497 containerd[1431]: time="2025-01-30T12:59:05.680384024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:06.277838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb-rootfs.mount: Deactivated successfully. Jan 30 12:59:06.417808 kubelet[2520]: E0130 12:59:06.417761 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:06.426526 kubelet[2520]: E0130 12:59:06.422532 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:06.426655 containerd[1431]: time="2025-01-30T12:59:06.423405512Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:59:06.468880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889764939.mount: Deactivated successfully. Jan 30 12:59:06.478706 containerd[1431]: time="2025-01-30T12:59:06.478645268Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\"" Jan 30 12:59:06.479539 containerd[1431]: time="2025-01-30T12:59:06.479501490Z" level=info msg="StartContainer for \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\"" Jan 30 12:59:06.519293 systemd[1]: Started cri-containerd-49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627.scope - libcontainer container 49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627. Jan 30 12:59:06.565914 containerd[1431]: time="2025-01-30T12:59:06.565716482Z" level=info msg="StartContainer for \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\" returns successfully" Jan 30 12:59:06.600349 systemd[1]: cri-containerd-49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627.scope: Deactivated successfully. 
Jan 30 12:59:06.632127 containerd[1431]: time="2025-01-30T12:59:06.632030671Z" level=info msg="shim disconnected" id=49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627 namespace=k8s.io Jan 30 12:59:06.632127 containerd[1431]: time="2025-01-30T12:59:06.632111801Z" level=warning msg="cleaning up after shim disconnected" id=49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627 namespace=k8s.io Jan 30 12:59:06.632127 containerd[1431]: time="2025-01-30T12:59:06.632120802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:07.297511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627-rootfs.mount: Deactivated successfully. Jan 30 12:59:07.432183 kubelet[2520]: E0130 12:59:07.431706 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:07.432943 kubelet[2520]: E0130 12:59:07.432590 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:07.435427 containerd[1431]: time="2025-01-30T12:59:07.435193738Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:59:07.452816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013408502.mount: Deactivated successfully. Jan 30 12:59:07.455635 kubelet[2520]: I0130 12:59:07.455384 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-w6glr" podStartSLOduration=2.846738802 podStartE2EDuration="10.455369098s" podCreationTimestamp="2025-01-30 12:58:57 +0000 UTC" firstStartedPulling="2025-01-30 12:58:57.91251194 +0000 UTC m=+15.715637258" lastFinishedPulling="2025-01-30 12:59:05.521142196 +0000 UTC m=+23.324267554" observedRunningTime="2025-01-30 12:59:06.448758041 +0000 UTC m=+24.251883399" watchObservedRunningTime="2025-01-30 12:59:07.455369098 +0000 UTC m=+25.258494416" Jan 30 12:59:07.456325 containerd[1431]: time="2025-01-30T12:59:07.456015052Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\"" Jan 30 12:59:07.457016 containerd[1431]: time="2025-01-30T12:59:07.456778940Z" level=info msg="StartContainer for \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\"" Jan 30 12:59:07.492248 systemd[1]: Started cri-containerd-fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032.scope - libcontainer container fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032. Jan 30 12:59:07.510903 systemd[1]: cri-containerd-fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032.scope: Deactivated successfully. 
Jan 30 12:59:07.512272 containerd[1431]: time="2025-01-30T12:59:07.511962124Z" level=info msg="StartContainer for \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\" returns successfully" Jan 30 12:59:07.533545 containerd[1431]: time="2025-01-30T12:59:07.533475838Z" level=info msg="shim disconnected" id=fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032 namespace=k8s.io Jan 30 12:59:07.533545 containerd[1431]: time="2025-01-30T12:59:07.533532204Z" level=warning msg="cleaning up after shim disconnected" id=fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032 namespace=k8s.io Jan 30 12:59:07.533545 containerd[1431]: time="2025-01-30T12:59:07.533541125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:08.285687 systemd[1]: run-containerd-runc-k8s.io-fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032-runc.iQtbHN.mount: Deactivated successfully. Jan 30 12:59:08.285794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032-rootfs.mount: Deactivated successfully. Jan 30 12:59:08.436442 kubelet[2520]: E0130 12:59:08.436396 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:08.441673 containerd[1431]: time="2025-01-30T12:59:08.441543535Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:59:08.466567 containerd[1431]: time="2025-01-30T12:59:08.466506075Z" level=info msg="CreateContainer within sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\"" Jan 30 12:59:08.467678 containerd[1431]: time="2025-01-30T12:59:08.467314325Z" level=info msg="StartContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\"" Jan 30 12:59:08.506304 systemd[1]: Started cri-containerd-5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7.scope - libcontainer container 5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7. Jan 30 12:59:08.534831 containerd[1431]: time="2025-01-30T12:59:08.533298514Z" level=info msg="StartContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" returns successfully" Jan 30 12:59:08.646121 kubelet[2520]: I0130 12:59:08.646085 2520 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 12:59:08.680636 kubelet[2520]: I0130 12:59:08.680578 2520 topology_manager.go:215] "Topology Admit Handler" podUID="22808391-2643-4fbf-b593-f862f0b9774c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lp8mw" Jan 30 12:59:08.683014 kubelet[2520]: I0130 12:59:08.682383 2520 topology_manager.go:215] "Topology Admit Handler" podUID="6ab88b8d-6cff-420e-87c4-59e051100658" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d5zcl" Jan 30 12:59:08.697204 systemd[1]: Created slice kubepods-burstable-pod22808391_2643_4fbf_b593_f862f0b9774c.slice - libcontainer container kubepods-burstable-pod22808391_2643_4fbf_b593_f862f0b9774c.slice. 
Jan 30 12:59:08.708589 systemd[1]: Created slice kubepods-burstable-pod6ab88b8d_6cff_420e_87c4_59e051100658.slice - libcontainer container kubepods-burstable-pod6ab88b8d_6cff_420e_87c4_59e051100658.slice. Jan 30 12:59:08.774440 kubelet[2520]: I0130 12:59:08.774397 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ab88b8d-6cff-420e-87c4-59e051100658-config-volume\") pod \"coredns-7db6d8ff4d-d5zcl\" (UID: \"6ab88b8d-6cff-420e-87c4-59e051100658\") " pod="kube-system/coredns-7db6d8ff4d-d5zcl" Jan 30 12:59:08.774440 kubelet[2520]: I0130 12:59:08.774446 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6bg\" (UniqueName: \"kubernetes.io/projected/22808391-2643-4fbf-b593-f862f0b9774c-kube-api-access-xn6bg\") pod \"coredns-7db6d8ff4d-lp8mw\" (UID: \"22808391-2643-4fbf-b593-f862f0b9774c\") " pod="kube-system/coredns-7db6d8ff4d-lp8mw" Jan 30 12:59:08.774602 kubelet[2520]: I0130 12:59:08.774472 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22808391-2643-4fbf-b593-f862f0b9774c-config-volume\") pod \"coredns-7db6d8ff4d-lp8mw\" (UID: \"22808391-2643-4fbf-b593-f862f0b9774c\") " pod="kube-system/coredns-7db6d8ff4d-lp8mw" Jan 30 12:59:08.774602 kubelet[2520]: I0130 12:59:08.774493 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6drk\" (UniqueName: \"kubernetes.io/projected/6ab88b8d-6cff-420e-87c4-59e051100658-kube-api-access-q6drk\") pod \"coredns-7db6d8ff4d-d5zcl\" (UID: \"6ab88b8d-6cff-420e-87c4-59e051100658\") " pod="kube-system/coredns-7db6d8ff4d-d5zcl" Jan 30 12:59:09.003251 kubelet[2520]: E0130 12:59:09.002858 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:09.006103 containerd[1431]: time="2025-01-30T12:59:09.005243623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lp8mw,Uid:22808391-2643-4fbf-b593-f862f0b9774c,Namespace:kube-system,Attempt:0,}" Jan 30 12:59:09.013591 kubelet[2520]: E0130 12:59:09.013553 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:09.015836 containerd[1431]: time="2025-01-30T12:59:09.014287639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5zcl,Uid:6ab88b8d-6cff-420e-87c4-59e051100658,Namespace:kube-system,Attempt:0,}" Jan 30 12:59:09.441448 kubelet[2520]: E0130 12:59:09.441415 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:09.457129 kubelet[2520]: I0130 12:59:09.457036 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z29nb" podStartSLOduration=6.020211305 podStartE2EDuration="12.456998165s" podCreationTimestamp="2025-01-30 12:58:57 +0000 UTC" firstStartedPulling="2025-01-30 12:58:57.778566867 +0000 UTC m=+15.581692185" lastFinishedPulling="2025-01-30 12:59:04.215353687 +0000 UTC m=+22.018479045" observedRunningTime="2025-01-30 12:59:09.456656849 +0000 UTC m=+27.259782207" 
watchObservedRunningTime="2025-01-30 12:59:09.456998165 +0000 UTC m=+27.260123483" Jan 30 12:59:10.443187 kubelet[2520]: E0130 12:59:10.443132 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:10.815298 systemd-networkd[1355]: cilium_host: Link UP Jan 30 12:59:10.815484 systemd-networkd[1355]: cilium_net: Link UP Jan 30 12:59:10.815487 systemd-networkd[1355]: cilium_net: Gained carrier Jan 30 12:59:10.815636 systemd-networkd[1355]: cilium_host: Gained carrier Jan 30 12:59:10.820395 systemd-networkd[1355]: cilium_host: Gained IPv6LL Jan 30 12:59:10.908225 systemd-networkd[1355]: cilium_vxlan: Link UP Jan 30 12:59:10.908234 systemd-networkd[1355]: cilium_vxlan: Gained carrier Jan 30 12:59:11.142273 systemd-networkd[1355]: cilium_net: Gained IPv6LL Jan 30 12:59:11.365114 kernel: NET: Registered PF_ALG protocol family Jan 30 12:59:11.462212 kubelet[2520]: E0130 12:59:11.460910 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:11.772975 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:47200.service - OpenSSH per-connection server daemon (10.0.0.1:47200). Jan 30 12:59:11.813734 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 47200 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:11.815334 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:11.823823 systemd-logind[1410]: New session 8 of user core. Jan 30 12:59:11.826408 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:59:11.968269 sshd[3578]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:11.972544 systemd-logind[1410]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:59:11.973023 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:47200.service: Deactivated successfully. Jan 30 12:59:11.977417 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:59:11.978337 systemd-logind[1410]: Removed session 8. Jan 30 12:59:12.009299 systemd-networkd[1355]: lxc_health: Link UP Jan 30 12:59:12.020641 systemd-networkd[1355]: lxc_health: Gained carrier Jan 30 12:59:12.185955 systemd-networkd[1355]: lxca1d589cebe0a: Link UP Jan 30 12:59:12.193090 kernel: eth0: renamed from tmpeb444 Jan 30 12:59:12.205019 systemd-networkd[1355]: lxca1d589cebe0a: Gained carrier Jan 30 12:59:12.207237 systemd-networkd[1355]: lxcb82572acd70c: Link UP Jan 30 12:59:12.217086 kernel: eth0: renamed from tmp83f30 Jan 30 12:59:12.226735 systemd-networkd[1355]: lxcb82572acd70c: Gained carrier Jan 30 12:59:12.573241 systemd-networkd[1355]: cilium_vxlan: Gained IPv6LL Jan 30 12:59:13.213254 systemd-networkd[1355]: lxc_health: Gained IPv6LL Jan 30 12:59:13.597258 systemd-networkd[1355]: lxcb82572acd70c: Gained IPv6LL Jan 30 12:59:13.704839 kubelet[2520]: E0130 12:59:13.704588 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:13.918198 systemd-networkd[1355]: lxca1d589cebe0a: Gained IPv6LL Jan 30 12:59:15.818672 containerd[1431]: time="2025-01-30T12:59:15.818558940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:59:15.818672 containerd[1431]: time="2025-01-30T12:59:15.818622706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:59:15.818672 containerd[1431]: time="2025-01-30T12:59:15.818633867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:59:15.819254 containerd[1431]: time="2025-01-30T12:59:15.818719675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:59:15.820531 containerd[1431]: time="2025-01-30T12:59:15.820326778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:59:15.820531 containerd[1431]: time="2025-01-30T12:59:15.820490832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:59:15.820531 containerd[1431]: time="2025-01-30T12:59:15.820522235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:59:15.821527 containerd[1431]: time="2025-01-30T12:59:15.820641846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:59:15.844293 systemd[1]: Started cri-containerd-83f309268fa19a8f4545a92a9fdfb8785f4f7ec6138e3061bfa4caec7b1b6f4c.scope - libcontainer container 83f309268fa19a8f4545a92a9fdfb8785f4f7ec6138e3061bfa4caec7b1b6f4c. Jan 30 12:59:15.849246 systemd[1]: Started cri-containerd-eb444f095d883e2e60ab622b138872df39dfecc99bc1e97679bc8f1c3699df39.scope - libcontainer container eb444f095d883e2e60ab622b138872df39dfecc99bc1e97679bc8f1c3699df39. 
Jan 30 12:59:15.857631 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:59:15.860974 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:59:15.881239 containerd[1431]: time="2025-01-30T12:59:15.881196286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lp8mw,Uid:22808391-2643-4fbf-b593-f862f0b9774c,Namespace:kube-system,Attempt:0,} returns sandbox id \"83f309268fa19a8f4545a92a9fdfb8785f4f7ec6138e3061bfa4caec7b1b6f4c\"" Jan 30 12:59:15.881368 containerd[1431]: time="2025-01-30T12:59:15.881267173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d5zcl,Uid:6ab88b8d-6cff-420e-87c4-59e051100658,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb444f095d883e2e60ab622b138872df39dfecc99bc1e97679bc8f1c3699df39\"" Jan 30 12:59:15.885775 kubelet[2520]: E0130 12:59:15.882588 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:15.885775 kubelet[2520]: E0130 12:59:15.883536 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:15.887443 containerd[1431]: time="2025-01-30T12:59:15.887401640Z" level=info msg="CreateContainer within sandbox \"83f309268fa19a8f4545a92a9fdfb8785f4f7ec6138e3061bfa4caec7b1b6f4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:59:15.887993 containerd[1431]: time="2025-01-30T12:59:15.887930647Z" level=info msg="CreateContainer within sandbox \"eb444f095d883e2e60ab622b138872df39dfecc99bc1e97679bc8f1c3699df39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:59:15.906694 containerd[1431]: time="2025-01-30T12:59:15.906644076Z" level=info msg="CreateContainer within sandbox \"83f309268fa19a8f4545a92a9fdfb8785f4f7ec6138e3061bfa4caec7b1b6f4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00c4dc95807345d56cb5535c6583059c65c8d6779df51ff601f07f9e7eafa81b\"" Jan 30 12:59:15.907215 containerd[1431]: time="2025-01-30T12:59:15.907190125Z" level=info msg="StartContainer for \"00c4dc95807345d56cb5535c6583059c65c8d6779df51ff601f07f9e7eafa81b\"" Jan 30 12:59:15.911105 containerd[1431]: time="2025-01-30T12:59:15.910996544Z" level=info msg="CreateContainer within sandbox \"eb444f095d883e2e60ab622b138872df39dfecc99bc1e97679bc8f1c3699df39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc3e90414b38da72b19dbf877431af1ae069acdbc6668c4c4c62663329c283b\"" Jan 30 12:59:15.913299 containerd[1431]: time="2025-01-30T12:59:15.913263666Z" level=info msg="StartContainer for \"0fc3e90414b38da72b19dbf877431af1ae069acdbc6668c4c4c62663329c283b\"" Jan 30 12:59:15.934267 systemd[1]: Started cri-containerd-00c4dc95807345d56cb5535c6583059c65c8d6779df51ff601f07f9e7eafa81b.scope - libcontainer container 00c4dc95807345d56cb5535c6583059c65c8d6779df51ff601f07f9e7eafa81b. Jan 30 12:59:15.937379 systemd[1]: Started cri-containerd-0fc3e90414b38da72b19dbf877431af1ae069acdbc6668c4c4c62663329c283b.scope - libcontainer container 0fc3e90414b38da72b19dbf877431af1ae069acdbc6668c4c4c62663329c283b. 
Jan 30 12:59:15.964554 containerd[1431]: time="2025-01-30T12:59:15.964508876Z" level=info msg="StartContainer for \"00c4dc95807345d56cb5535c6583059c65c8d6779df51ff601f07f9e7eafa81b\" returns successfully" Jan 30 12:59:15.964678 containerd[1431]: time="2025-01-30T12:59:15.964596524Z" level=info msg="StartContainer for \"0fc3e90414b38da72b19dbf877431af1ae069acdbc6668c4c4c62663329c283b\" returns successfully" Jan 30 12:59:16.472135 kubelet[2520]: E0130 12:59:16.472102 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:16.474418 kubelet[2520]: E0130 12:59:16.474392 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:16.485261 kubelet[2520]: I0130 12:59:16.485206 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d5zcl" podStartSLOduration=19.485191124 podStartE2EDuration="19.485191124s" podCreationTimestamp="2025-01-30 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:59:16.484882297 +0000 UTC m=+34.288007655" watchObservedRunningTime="2025-01-30 12:59:16.485191124 +0000 UTC m=+34.288316482" Jan 30 12:59:16.510002 kubelet[2520]: I0130 12:59:16.509763 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lp8mw" podStartSLOduration=19.509744045 podStartE2EDuration="19.509744045s" podCreationTimestamp="2025-01-30 12:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:59:16.50748389 +0000 UTC m=+34.310609248" watchObservedRunningTime="2025-01-30 12:59:16.509744045 +0000 UTC m=+34.312869363" Jan 30 12:59:16.824154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462057816.mount: Deactivated successfully. Jan 30 12:59:16.980962 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:34860.service - OpenSSH per-connection server daemon (10.0.0.1:34860). Jan 30 12:59:17.019430 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 34860 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:17.020762 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:17.024941 systemd-logind[1410]: New session 9 of user core. Jan 30 12:59:17.030272 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 12:59:17.151345 sshd[3937]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:17.154302 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 12:59:17.155091 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:34860.service: Deactivated successfully. Jan 30 12:59:17.157756 systemd-logind[1410]: Session 9 logged out. Waiting for processes to exit. Jan 30 12:59:17.158595 systemd-logind[1410]: Removed session 9. 
Jan 30 12:59:17.273887 kubelet[2520]: I0130 12:59:17.273639 2520 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:59:17.274436 kubelet[2520]: E0130 12:59:17.274416 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:17.476656 kubelet[2520]: E0130 12:59:17.476544 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:17.477761 kubelet[2520]: E0130 12:59:17.477045 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:17.477761 kubelet[2520]: E0130 12:59:17.477553 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:18.478550 kubelet[2520]: E0130 12:59:18.478490 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:59:22.163856 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864). Jan 30 12:59:22.204864 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:22.206582 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:22.213697 systemd-logind[1410]: New session 10 of user core. Jan 30 12:59:22.221298 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 12:59:22.364518 sshd[3953]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:22.369175 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:34864.service: Deactivated successfully. Jan 30 12:59:22.371518 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 12:59:22.372508 systemd-logind[1410]: Session 10 logged out. Waiting for processes to exit. Jan 30 12:59:22.374554 systemd-logind[1410]: Removed session 10. Jan 30 12:59:27.378243 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:33908.service - OpenSSH per-connection server daemon (10.0.0.1:33908). Jan 30 12:59:27.450871 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 33908 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:27.452233 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:27.459745 systemd-logind[1410]: New session 11 of user core. Jan 30 12:59:27.478294 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 12:59:27.596284 sshd[3970]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:27.604841 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:33908.service: Deactivated successfully. Jan 30 12:59:27.608768 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 12:59:27.611105 systemd-logind[1410]: Session 11 logged out. Waiting for processes to exit. Jan 30 12:59:27.617391 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:33922.service - OpenSSH per-connection server daemon (10.0.0.1:33922). Jan 30 12:59:27.619473 systemd-logind[1410]: Removed session 11. 
Jan 30 12:59:27.650690 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 33922 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:27.652109 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:27.658505 systemd-logind[1410]: New session 12 of user core. Jan 30 12:59:27.668270 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 12:59:27.849498 sshd[3985]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:27.862710 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:33922.service: Deactivated successfully. Jan 30 12:59:27.864735 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 12:59:27.866664 systemd-logind[1410]: Session 12 logged out. Waiting for processes to exit. Jan 30 12:59:27.869236 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:33934.service - OpenSSH per-connection server daemon (10.0.0.1:33934). Jan 30 12:59:27.873147 systemd-logind[1410]: Removed session 12. Jan 30 12:59:27.931138 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 33934 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:27.933273 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:27.938141 systemd-logind[1410]: New session 13 of user core. Jan 30 12:59:27.947295 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 12:59:28.074845 sshd[3998]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:28.078767 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:33934.service: Deactivated successfully. Jan 30 12:59:28.080879 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 12:59:28.081902 systemd-logind[1410]: Session 13 logged out. Waiting for processes to exit. Jan 30 12:59:28.083928 systemd-logind[1410]: Removed session 13. Jan 30 12:59:33.086952 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:48906.service - OpenSSH per-connection server daemon (10.0.0.1:48906). Jan 30 12:59:33.122060 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 48906 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:33.123678 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:33.128807 systemd-logind[1410]: New session 14 of user core. Jan 30 12:59:33.138284 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 12:59:33.257177 sshd[4014]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:33.261442 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:48906.service: Deactivated successfully. Jan 30 12:59:33.263202 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 12:59:33.263872 systemd-logind[1410]: Session 14 logged out. Waiting for processes to exit. Jan 30 12:59:33.264807 systemd-logind[1410]: Removed session 14. Jan 30 12:59:38.271282 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:48916.service - OpenSSH per-connection server daemon (10.0.0.1:48916). Jan 30 12:59:38.314044 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 48916 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:38.315576 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:38.327830 systemd-logind[1410]: New session 15 of user core. Jan 30 12:59:38.335270 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 30 12:59:38.461380 sshd[4029]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:38.479158 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:48916.service: Deactivated successfully. Jan 30 12:59:38.483305 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 12:59:38.489028 systemd-logind[1410]: Session 15 logged out. Waiting for processes to exit. Jan 30 12:59:38.498488 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:48926.service - OpenSSH per-connection server daemon (10.0.0.1:48926). Jan 30 12:59:38.500118 systemd-logind[1410]: Removed session 15. Jan 30 12:59:38.534888 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 48926 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:38.536658 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:38.540932 systemd-logind[1410]: New session 16 of user core. Jan 30 12:59:38.548272 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 12:59:38.818029 sshd[4043]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:38.834488 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:48926.service: Deactivated successfully. Jan 30 12:59:38.837358 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 12:59:38.838982 systemd-logind[1410]: Session 16 logged out. Waiting for processes to exit. Jan 30 12:59:38.852424 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:48942.service - OpenSSH per-connection server daemon (10.0.0.1:48942). Jan 30 12:59:38.853332 systemd-logind[1410]: Removed session 16. Jan 30 12:59:38.890342 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 48942 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:38.891876 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:38.895984 systemd-logind[1410]: New session 17 of user core. Jan 30 12:59:38.903246 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 12:59:40.233422 sshd[4056]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:40.244035 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:48942.service: Deactivated successfully. Jan 30 12:59:40.245922 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 12:59:40.248974 systemd-logind[1410]: Session 17 logged out. Waiting for processes to exit. Jan 30 12:59:40.259010 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:48944.service - OpenSSH per-connection server daemon (10.0.0.1:48944). Jan 30 12:59:40.260156 systemd-logind[1410]: Removed session 17. Jan 30 12:59:40.298562 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 48944 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:40.300144 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:40.304939 systemd-logind[1410]: New session 18 of user core. Jan 30 12:59:40.312322 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 12:59:40.553755 sshd[4079]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:40.567642 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:48944.service: Deactivated successfully. Jan 30 12:59:40.570586 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 12:59:40.572224 systemd-logind[1410]: Session 18 logged out. Waiting for processes to exit. Jan 30 12:59:40.574016 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:48950.service - OpenSSH per-connection server daemon (10.0.0.1:48950). 
Jan 30 12:59:40.574889 systemd-logind[1410]: Removed session 18. Jan 30 12:59:40.613342 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 48950 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:40.614773 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:40.619092 systemd-logind[1410]: New session 19 of user core. Jan 30 12:59:40.630274 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 12:59:40.749693 sshd[4091]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:40.753779 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:48950.service: Deactivated successfully. Jan 30 12:59:40.755598 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 12:59:40.756304 systemd-logind[1410]: Session 19 logged out. Waiting for processes to exit. Jan 30 12:59:40.757180 systemd-logind[1410]: Removed session 19. Jan 30 12:59:45.760932 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:37584.service - OpenSSH per-connection server daemon (10.0.0.1:37584). Jan 30 12:59:45.796206 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 37584 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:45.797737 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:45.803095 systemd-logind[1410]: New session 20 of user core. Jan 30 12:59:45.819361 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 12:59:45.953326 sshd[4111]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:45.956156 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:37584.service: Deactivated successfully. Jan 30 12:59:45.958756 systemd-logind[1410]: Session 20 logged out. Waiting for processes to exit. Jan 30 12:59:45.958944 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 12:59:45.960516 systemd-logind[1410]: Removed session 20. Jan 30 12:59:50.964907 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:37598.service - OpenSSH per-connection server daemon (10.0.0.1:37598). Jan 30 12:59:51.003983 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 37598 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:51.005792 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:51.010397 systemd-logind[1410]: New session 21 of user core. Jan 30 12:59:51.020311 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 12:59:51.141543 sshd[4126]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:51.145333 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:37598.service: Deactivated successfully. Jan 30 12:59:51.147053 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 12:59:51.148263 systemd-logind[1410]: Session 21 logged out. Waiting for processes to exit. Jan 30 12:59:51.149200 systemd-logind[1410]: Removed session 21. Jan 30 12:59:56.153012 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:46446.service - OpenSSH per-connection server daemon (10.0.0.1:46446). Jan 30 12:59:56.187888 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 46446 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:56.189374 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:56.194094 systemd-logind[1410]: New session 22 of user core. Jan 30 12:59:56.204266 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 12:59:56.320930 sshd[4140]: pam_unix(sshd:session): session closed for user core Jan 30 12:59:56.329818 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:46446.service: Deactivated successfully. Jan 30 12:59:56.331460 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 12:59:56.334732 systemd-logind[1410]: Session 22 logged out. Waiting for processes to exit. Jan 30 12:59:56.342384 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:46462.service - OpenSSH per-connection server daemon (10.0.0.1:46462). Jan 30 12:59:56.343869 systemd-logind[1410]: Removed session 22. Jan 30 12:59:56.373203 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 46462 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:59:56.374981 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:59:56.378606 systemd-logind[1410]: New session 23 of user core. Jan 30 12:59:56.390262 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 12:59:58.581551 containerd[1431]: time="2025-01-30T12:59:58.581362181Z" level=info msg="StopContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" with timeout 30 (s)" Jan 30 12:59:58.584147 containerd[1431]: time="2025-01-30T12:59:58.583736836Z" level=info msg="Stop container \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" with signal terminated" Jan 30 12:59:58.595475 systemd[1]: cri-containerd-c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412.scope: Deactivated successfully. Jan 30 12:59:58.620938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412-rootfs.mount: Deactivated successfully. Jan 30 12:59:58.626658 containerd[1431]: time="2025-01-30T12:59:58.626583651Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:59:58.627891 containerd[1431]: time="2025-01-30T12:59:58.627862240Z" level=info msg="StopContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" with timeout 2 (s)" Jan 30 12:59:58.628213 containerd[1431]: time="2025-01-30T12:59:58.628192368Z" level=info msg="Stop container \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" with signal terminated" Jan 30 12:59:58.631132 containerd[1431]: time="2025-01-30T12:59:58.630951431Z" level=info msg="shim disconnected" id=c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412 namespace=k8s.io Jan 30 12:59:58.631132 containerd[1431]: time="2025-01-30T12:59:58.631099674Z" level=warning msg="cleaning up after shim disconnected" id=c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412 namespace=k8s.io Jan 30 12:59:58.631132 containerd[1431]: time="2025-01-30T12:59:58.631109994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:58.635822 systemd-networkd[1355]: lxc_health: Link DOWN Jan 30 12:59:58.635834 systemd-networkd[1355]: lxc_health: Lost carrier Jan 30 12:59:58.660493 systemd[1]: cri-containerd-5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7.scope: Deactivated successfully. Jan 30 12:59:58.660757 systemd[1]: cri-containerd-5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7.scope: Consumed 6.828s CPU time. 
Jan 30 12:59:58.682323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7-rootfs.mount: Deactivated successfully. Jan 30 12:59:58.684987 containerd[1431]: time="2025-01-30T12:59:58.684923260Z" level=info msg="StopContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" returns successfully" Jan 30 12:59:58.686000 containerd[1431]: time="2025-01-30T12:59:58.685945243Z" level=info msg="StopPodSandbox for \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\"" Jan 30 12:59:58.686211 containerd[1431]: time="2025-01-30T12:59:58.685996524Z" level=info msg="Container to stop \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.687669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18-shm.mount: Deactivated successfully. Jan 30 12:59:58.689571 containerd[1431]: time="2025-01-30T12:59:58.689518044Z" level=info msg="shim disconnected" id=5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7 namespace=k8s.io Jan 30 12:59:58.689571 containerd[1431]: time="2025-01-30T12:59:58.689568486Z" level=warning msg="cleaning up after shim disconnected" id=5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7 namespace=k8s.io Jan 30 12:59:58.689571 containerd[1431]: time="2025-01-30T12:59:58.689576806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:58.695641 systemd[1]: cri-containerd-60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18.scope: Deactivated successfully. Jan 30 12:59:58.710407 containerd[1431]: time="2025-01-30T12:59:58.710362519Z" level=info msg="StopContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" returns successfully" Jan 30 12:59:58.711017 containerd[1431]: time="2025-01-30T12:59:58.710954373Z" level=info msg="StopPodSandbox for \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\"" Jan 30 12:59:58.711017 containerd[1431]: time="2025-01-30T12:59:58.710993134Z" level=info msg="Container to stop \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.711017 containerd[1431]: time="2025-01-30T12:59:58.711006214Z" level=info msg="Container to stop \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.711017 containerd[1431]: time="2025-01-30T12:59:58.711016014Z" level=info msg="Container to stop \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.712025 containerd[1431]: time="2025-01-30T12:59:58.711024854Z" level=info msg="Container to stop \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.712025 containerd[1431]: time="2025-01-30T12:59:58.711033974Z" level=info msg="Container to stop \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:59:58.713252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909-shm.mount: 
Deactivated successfully. Jan 30 12:59:58.717467 systemd[1]: cri-containerd-7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909.scope: Deactivated successfully. Jan 30 12:59:58.735786 containerd[1431]: time="2025-01-30T12:59:58.735722617Z" level=info msg="shim disconnected" id=60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18 namespace=k8s.io Jan 30 12:59:58.735786 containerd[1431]: time="2025-01-30T12:59:58.735782298Z" level=warning msg="cleaning up after shim disconnected" id=60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18 namespace=k8s.io Jan 30 12:59:58.735786 containerd[1431]: time="2025-01-30T12:59:58.735790778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:58.746477 containerd[1431]: time="2025-01-30T12:59:58.746285777Z" level=info msg="shim disconnected" id=7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909 namespace=k8s.io Jan 30 12:59:58.746477 containerd[1431]: time="2025-01-30T12:59:58.746352819Z" level=warning msg="cleaning up after shim disconnected" id=7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909 namespace=k8s.io Jan 30 12:59:58.746477 containerd[1431]: time="2025-01-30T12:59:58.746362659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:59:58.758159 containerd[1431]: time="2025-01-30T12:59:58.758052165Z" level=info msg="TearDown network for sandbox \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\" successfully" Jan 30 12:59:58.758159 containerd[1431]: time="2025-01-30T12:59:58.758143967Z" level=info msg="StopPodSandbox for \"60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18\" returns successfully" Jan 30 12:59:58.764248 containerd[1431]: time="2025-01-30T12:59:58.764176625Z" level=warning msg="cleanup warnings time=\"2025-01-30T12:59:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 12:59:58.765213 containerd[1431]: time="2025-01-30T12:59:58.765110726Z" level=info msg="TearDown network for sandbox \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" successfully" Jan 30 12:59:58.765213 containerd[1431]: time="2025-01-30T12:59:58.765141767Z" level=info msg="StopPodSandbox for \"7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909\" returns successfully" Jan 30 12:59:58.854542 kubelet[2520]: I0130 12:59:58.854501 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cni-path\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854542 kubelet[2520]: I0130 12:59:58.854543 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-xtables-lock\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854563 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-lib-modules\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854587 2520 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-config-path\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854608 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qrd\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-kube-api-access-q4qrd\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854625 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c75a59fe-ab84-4816-aafc-90fc0848b961-clustermesh-secrets\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854639 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-hostproc\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.854916 kubelet[2520]: I0130 12:59:58.854654 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-cgroup\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854667 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-run\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854683 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-net\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854697 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-kernel\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854714 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx6l2\" (UniqueName: \"kubernetes.io/projected/5d024604-d426-488e-9afc-7400f94be40e-kube-api-access-cx6l2\") pod \"5d024604-d426-488e-9afc-7400f94be40e\" (UID: \"5d024604-d426-488e-9afc-7400f94be40e\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854730 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d024604-d426-488e-9afc-7400f94be40e-cilium-config-path\") pod \"5d024604-d426-488e-9afc-7400f94be40e\" (UID: \"5d024604-d426-488e-9afc-7400f94be40e\") " Jan 30 12:59:58.855054 kubelet[2520]: I0130 12:59:58.854745 2520 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-bpf-maps\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855196 kubelet[2520]: I0130 12:59:58.854761 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-hubble-tls\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.855196 kubelet[2520]: I0130 12:59:58.854778 2520 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-etc-cni-netd\") pod \"c75a59fe-ab84-4816-aafc-90fc0848b961\" (UID: \"c75a59fe-ab84-4816-aafc-90fc0848b961\") " Jan 30 12:59:58.858725 kubelet[2520]: I0130 12:59:58.858693 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cni-path" (OuterVolumeSpecName: "cni-path") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.858772 kubelet[2520]: I0130 12:59:58.858729 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.858772 kubelet[2520]: I0130 12:59:58.858692 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.858772 kubelet[2520]: I0130 12:59:58.858763 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.859263 kubelet[2520]: I0130 12:59:58.859044 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.859263 kubelet[2520]: I0130 12:59:58.859109 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.863211 kubelet[2520]: I0130 12:59:58.863180 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:59:58.863349 kubelet[2520]: I0130 12:59:58.863333 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-hostproc" (OuterVolumeSpecName: "hostproc") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.863544 kubelet[2520]: I0130 12:59:58.863506 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-kube-api-access-q4qrd" (OuterVolumeSpecName: "kube-api-access-q4qrd") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "kube-api-access-q4qrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:59:58.863591 kubelet[2520]: I0130 12:59:58.863553 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.863591 kubelet[2520]: I0130 12:59:58.863571 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.863591 kubelet[2520]: I0130 12:59:58.863588 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:59:58.865529 kubelet[2520]: I0130 12:59:58.865411 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d024604-d426-488e-9afc-7400f94be40e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d024604-d426-488e-9afc-7400f94be40e" (UID: "5d024604-d426-488e-9afc-7400f94be40e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 12:59:58.865640 kubelet[2520]: I0130 12:59:58.865431 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 12:59:58.865701 kubelet[2520]: I0130 12:59:58.865512 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75a59fe-ab84-4816-aafc-90fc0848b961-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c75a59fe-ab84-4816-aafc-90fc0848b961" (UID: "c75a59fe-ab84-4816-aafc-90fc0848b961"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 12:59:58.865766 kubelet[2520]: I0130 12:59:58.865645 2520 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d024604-d426-488e-9afc-7400f94be40e-kube-api-access-cx6l2" (OuterVolumeSpecName: "kube-api-access-cx6l2") pod "5d024604-d426-488e-9afc-7400f94be40e" (UID: "5d024604-d426-488e-9afc-7400f94be40e"). InnerVolumeSpecName "kube-api-access-cx6l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:59:58.955160 kubelet[2520]: I0130 12:59:58.955111 2520 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955160 kubelet[2520]: I0130 12:59:58.955150 2520 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955160 kubelet[2520]: I0130 12:59:58.955162 2520 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955160 kubelet[2520]: I0130 12:59:58.955170 2520 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955160 kubelet[2520]: I0130 12:59:58.955179 2520 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q4qrd\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-kube-api-access-q4qrd\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955187 2520 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c75a59fe-ab84-4816-aafc-90fc0848b961-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955197 2520 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955205 2520 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955212 2520 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955219 2520 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955227 2520 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955240 2520 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cx6l2\" (UniqueName: \"kubernetes.io/projected/5d024604-d426-488e-9afc-7400f94be40e-kube-api-access-cx6l2\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955496 kubelet[2520]: I0130 12:59:58.955250 2520 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d024604-d426-488e-9afc-7400f94be40e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955668 kubelet[2520]: I0130 12:59:58.955258 2520 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955668 kubelet[2520]: I0130 12:59:58.955264 2520 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c75a59fe-ab84-4816-aafc-90fc0848b961-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:58.955668 kubelet[2520]: I0130 12:59:58.955272 2520 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c75a59fe-ab84-4816-aafc-90fc0848b961-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 30 12:59:59.596153 kubelet[2520]: I0130 12:59:59.594216 2520 scope.go:117] "RemoveContainer" containerID="c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412" Jan 30 12:59:59.597866 containerd[1431]: time="2025-01-30T12:59:59.597741983Z" level=info msg="RemoveContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\"" Jan 30 12:59:59.603101 systemd[1]: Removed slice kubepods-besteffort-pod5d024604_d426_488e_9afc_7400f94be40e.slice - libcontainer container kubepods-besteffort-pod5d024604_d426_488e_9afc_7400f94be40e.slice. Jan 30 12:59:59.605388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60040d475683a3e2a96f7066baad28d47ee207367eca37a3c12a6b5c474b1c18-rootfs.mount: Deactivated successfully. Jan 30 12:59:59.605493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7422d106a3211781f9c2ce950d784d35bfd8133ecbc8fe524e99c82d02258909-rootfs.mount: Deactivated successfully. Jan 30 12:59:59.605549 systemd[1]: var-lib-kubelet-pods-5d024604\x2dd426\x2d488e\x2d9afc\x2d7400f94be40e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcx6l2.mount: Deactivated successfully. Jan 30 12:59:59.605605 systemd[1]: var-lib-kubelet-pods-c75a59fe\x2dab84\x2d4816\x2daafc\x2d90fc0848b961-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4qrd.mount: Deactivated successfully. Jan 30 12:59:59.605656 systemd[1]: var-lib-kubelet-pods-c75a59fe\x2dab84\x2d4816\x2daafc\x2d90fc0848b961-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 12:59:59.605709 systemd[1]: var-lib-kubelet-pods-c75a59fe\x2dab84\x2d4816\x2daafc\x2d90fc0848b961-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 12:59:59.609271 containerd[1431]: time="2025-01-30T12:59:59.609211556Z" level=info msg="RemoveContainer for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" returns successfully" Jan 30 12:59:59.611395 kubelet[2520]: I0130 12:59:59.611297 2520 scope.go:117] "RemoveContainer" containerID="c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412" Jan 30 12:59:59.611637 systemd[1]: Removed slice kubepods-burstable-podc75a59fe_ab84_4816_aafc_90fc0848b961.slice - libcontainer container kubepods-burstable-podc75a59fe_ab84_4816_aafc_90fc0848b961.slice. Jan 30 12:59:59.611729 systemd[1]: kubepods-burstable-podc75a59fe_ab84_4816_aafc_90fc0848b961.slice: Consumed 7.114s CPU time. Jan 30 12:59:59.612417 containerd[1431]: time="2025-01-30T12:59:59.612310544Z" level=error msg="ContainerStatus for \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\": not found" Jan 30 12:59:59.626150 kubelet[2520]: E0130 12:59:59.625343 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\": not found" containerID="c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412" Jan 30 12:59:59.626589 kubelet[2520]: I0130 12:59:59.625407 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412"} err="failed to get container status \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0c9308c85132762552f92f09be0e8311b162b898f87d66f929f21a859641412\": not found" Jan 30 12:59:59.626589 kubelet[2520]: I0130 12:59:59.626429 2520 scope.go:117] "RemoveContainer" containerID="5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7" Jan 30 12:59:59.629792 containerd[1431]: time="2025-01-30T12:59:59.629748849Z" level=info msg="RemoveContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\"" Jan 30 12:59:59.633395 containerd[1431]: time="2025-01-30T12:59:59.633343008Z" level=info msg="RemoveContainer for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" returns successfully" Jan 30 12:59:59.633651 kubelet[2520]: I0130 12:59:59.633619 2520 scope.go:117] "RemoveContainer" containerID="fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032" Jan 30 12:59:59.634953 containerd[1431]: time="2025-01-30T12:59:59.634874642Z" level=info msg="RemoveContainer for \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\"" Jan 30 12:59:59.641620 containerd[1431]: time="2025-01-30T12:59:59.641574390Z" level=info msg="RemoveContainer for \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\" returns successfully" Jan 30 12:59:59.641882 kubelet[2520]: I0130 12:59:59.641851 2520 scope.go:117] "RemoveContainer" containerID="49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627" Jan 30 12:59:59.643402 containerd[1431]: time="2025-01-30T12:59:59.643336469Z" level=info msg="RemoveContainer for \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\"" Jan 30 12:59:59.646073 containerd[1431]: time="2025-01-30T12:59:59.646023288Z" level=info 
msg="RemoveContainer for \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\" returns successfully" Jan 30 12:59:59.646302 kubelet[2520]: I0130 12:59:59.646267 2520 scope.go:117] "RemoveContainer" containerID="56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb" Jan 30 12:59:59.647505 containerd[1431]: time="2025-01-30T12:59:59.647475320Z" level=info msg="RemoveContainer for \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\"" Jan 30 12:59:59.650119 containerd[1431]: time="2025-01-30T12:59:59.650057777Z" level=info msg="RemoveContainer for \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\" returns successfully" Jan 30 12:59:59.650309 kubelet[2520]: I0130 12:59:59.650268 2520 scope.go:117] "RemoveContainer" containerID="26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825" Jan 30 12:59:59.651347 containerd[1431]: time="2025-01-30T12:59:59.651324285Z" level=info msg="RemoveContainer for \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\"" Jan 30 12:59:59.653701 containerd[1431]: time="2025-01-30T12:59:59.653667217Z" level=info msg="RemoveContainer for \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\" returns successfully" Jan 30 12:59:59.653947 kubelet[2520]: I0130 12:59:59.653924 2520 scope.go:117] "RemoveContainer" containerID="5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7" Jan 30 12:59:59.654199 containerd[1431]: time="2025-01-30T12:59:59.654160788Z" level=error msg="ContainerStatus for \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\": not found" Jan 30 12:59:59.654369 kubelet[2520]: E0130 12:59:59.654323 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\": not found" containerID="5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7" Jan 30 12:59:59.654369 kubelet[2520]: I0130 12:59:59.654357 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7"} err="failed to get container status \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5095f9e8ef425ddb2796d8c7ebf116e4fc1289b1b6f5339115cf3bd3c790abe7\": not found" Jan 30 12:59:59.654439 kubelet[2520]: I0130 12:59:59.654380 2520 scope.go:117] "RemoveContainer" containerID="fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032" Jan 30 12:59:59.654684 containerd[1431]: time="2025-01-30T12:59:59.654591437Z" level=error msg="ContainerStatus for \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\": not found" Jan 30 12:59:59.654769 kubelet[2520]: E0130 12:59:59.654748 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\": not found" 
containerID="fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032" Jan 30 12:59:59.654823 kubelet[2520]: I0130 12:59:59.654771 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032"} err="failed to get container status \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa7693c704c54728170be9a0d85aa9465bf405c81ed1f9023b879b4d2bf3a032\": not found" Jan 30 12:59:59.654823 kubelet[2520]: I0130 12:59:59.654788 2520 scope.go:117] "RemoveContainer" containerID="49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627" Jan 30 12:59:59.654975 containerd[1431]: time="2025-01-30T12:59:59.654945005Z" level=error msg="ContainerStatus for \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\": not found" Jan 30 12:59:59.655078 kubelet[2520]: E0130 12:59:59.655053 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\": not found" containerID="49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627" Jan 30 12:59:59.655250 kubelet[2520]: I0130 12:59:59.655112 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627"} err="failed to get container status \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\": rpc error: code = NotFound desc = an error occurred when try to find container \"49fe96df319b939195494e59187a69318abfbdfae9e17f013b32b9ce072c2627\": not found" Jan 30 12:59:59.655250 kubelet[2520]: I0130 12:59:59.655127 2520 scope.go:117] "RemoveContainer" containerID="56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb" Jan 30 12:59:59.655418 containerd[1431]: time="2025-01-30T12:59:59.655317253Z" level=error msg="ContainerStatus for \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\": not found" Jan 30 12:59:59.655447 kubelet[2520]: E0130 12:59:59.655434 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\": not found" containerID="56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb" Jan 30 12:59:59.655478 kubelet[2520]: I0130 12:59:59.655457 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb"} err="failed to get container status \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\": rpc error: code = NotFound desc = an error occurred when try to find container \"56d140c42b3e224311a5e12a68182e9c9d8e6f5a0529dedf4d309f0edaf25beb\": not found" Jan 30 12:59:59.655478 kubelet[2520]: I0130 12:59:59.655472 2520 scope.go:117] "RemoveContainer" 
containerID="26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825" Jan 30 12:59:59.655717 containerd[1431]: time="2025-01-30T12:59:59.655636940Z" level=error msg="ContainerStatus for \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\": not found" Jan 30 12:59:59.655791 kubelet[2520]: E0130 12:59:59.655770 2520 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\": not found" containerID="26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825" Jan 30 12:59:59.655821 kubelet[2520]: I0130 12:59:59.655795 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825"} err="failed to get container status \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\": rpc error: code = NotFound desc = an error occurred when try to find container \"26b9b1ae191dcd793a9cedc15d8033b82095515f544e162d5f1cd0d67eb5f825\": not found" Jan 30 13:00:00.306115 kubelet[2520]: I0130 13:00:00.305816 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d024604-d426-488e-9afc-7400f94be40e" path="/var/lib/kubelet/pods/5d024604-d426-488e-9afc-7400f94be40e/volumes" Jan 30 13:00:00.306461 kubelet[2520]: I0130 13:00:00.306218 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" path="/var/lib/kubelet/pods/c75a59fe-ab84-4816-aafc-90fc0848b961/volumes" Jan 30 13:00:00.507976 sshd[4154]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:00.519134 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:46462.service: Deactivated successfully. Jan 30 13:00:00.520985 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:00:00.523151 systemd[1]: session-23.scope: Consumed 1.435s CPU time. Jan 30 13:00:00.524363 systemd-logind[1410]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:00:00.531424 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:46492.service - OpenSSH per-connection server daemon (10.0.0.1:46492). Jan 30 13:00:00.532406 systemd-logind[1410]: Removed session 23. Jan 30 13:00:00.563535 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 46492 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:00.565121 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:00.569182 systemd-logind[1410]: New session 24 of user core. Jan 30 13:00:00.578282 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:00:01.347051 sshd[4315]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:01.355897 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:46492.service: Deactivated successfully. Jan 30 13:00:01.360007 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:00:01.362285 systemd-logind[1410]: Session 24 logged out. Waiting for processes to exit. 
Jan 30 13:00:01.369998 kubelet[2520]: I0130 13:00:01.369933 2520 topology_manager.go:215] "Topology Admit Handler" podUID="38be6e59-47ea-4691-b793-ea7bc1f1d1c3" podNamespace="kube-system" podName="cilium-sqgcn" Jan 30 13:00:01.369998 kubelet[2520]: E0130 13:00:01.369999 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="apply-sysctl-overwrites" Jan 30 13:00:01.369998 kubelet[2520]: E0130 13:00:01.370010 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d024604-d426-488e-9afc-7400f94be40e" containerName="cilium-operator" Jan 30 13:00:01.370397 kubelet[2520]: E0130 13:00:01.370016 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="clean-cilium-state" Jan 30 13:00:01.370397 kubelet[2520]: E0130 13:00:01.370022 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="cilium-agent" Jan 30 13:00:01.370397 kubelet[2520]: E0130 13:00:01.370029 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="mount-cgroup" Jan 30 13:00:01.370397 kubelet[2520]: E0130 13:00:01.370035 2520 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="mount-bpf-fs" Jan 30 13:00:01.370397 kubelet[2520]: I0130 13:00:01.370055 2520 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d024604-d426-488e-9afc-7400f94be40e" containerName="cilium-operator" Jan 30 13:00:01.370397 kubelet[2520]: I0130 13:00:01.370061 2520 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75a59fe-ab84-4816-aafc-90fc0848b961" containerName="cilium-agent" Jan 30 13:00:01.379199 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:46504.service - OpenSSH per-connection server daemon (10.0.0.1:46504). Jan 30 13:00:01.383036 systemd-logind[1410]: Removed session 24. Jan 30 13:00:01.388906 systemd[1]: Created slice kubepods-burstable-pod38be6e59_47ea_4691_b793_ea7bc1f1d1c3.slice - libcontainer container kubepods-burstable-pod38be6e59_47ea_4691_b793_ea7bc1f1d1c3.slice. Jan 30 13:00:01.422137 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 46504 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:01.424413 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:01.429702 systemd-logind[1410]: New session 25 of user core. Jan 30 13:00:01.441294 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 30 13:00:01.471600 kubelet[2520]: I0130 13:00:01.471545 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-cilium-cgroup\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471600 kubelet[2520]: I0130 13:00:01.471597 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-cilium-run\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471615 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-lib-modules\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471631 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-clustermesh-secrets\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471649 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-bpf-maps\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471663 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-xtables-lock\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471678 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-cni-path\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471820 kubelet[2520]: I0130 13:00:01.471695 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-cilium-ipsec-secrets\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471953 kubelet[2520]: I0130 13:00:01.471711 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-etc-cni-netd\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471953 kubelet[2520]: I0130 13:00:01.471729 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7qw\" (UniqueName: 
\"kubernetes.io/projected/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-kube-api-access-mw7qw\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471953 kubelet[2520]: I0130 13:00:01.471746 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-hostproc\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471953 kubelet[2520]: I0130 13:00:01.471760 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-cilium-config-path\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.471953 kubelet[2520]: I0130 13:00:01.471774 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-host-proc-sys-net\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.472056 kubelet[2520]: I0130 13:00:01.471787 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-host-proc-sys-kernel\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.472056 kubelet[2520]: I0130 13:00:01.471803 2520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38be6e59-47ea-4691-b793-ea7bc1f1d1c3-hubble-tls\") pod \"cilium-sqgcn\" (UID: \"38be6e59-47ea-4691-b793-ea7bc1f1d1c3\") " pod="kube-system/cilium-sqgcn" Jan 30 13:00:01.495191 sshd[4329]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:01.505724 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:46504.service: Deactivated successfully. Jan 30 13:00:01.508493 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:00:01.509815 systemd-logind[1410]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:00:01.511125 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:46506.service - OpenSSH per-connection server daemon (10.0.0.1:46506). Jan 30 13:00:01.512144 systemd-logind[1410]: Removed session 25. Jan 30 13:00:01.544913 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 46506 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 13:00:01.546341 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:00:01.550640 systemd-logind[1410]: New session 26 of user core. Jan 30 13:00:01.556276 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 30 13:00:01.692656 kubelet[2520]: E0130 13:00:01.692623 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:01.693177 containerd[1431]: time="2025-01-30T13:00:01.693136661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sqgcn,Uid:38be6e59-47ea-4691-b793-ea7bc1f1d1c3,Namespace:kube-system,Attempt:0,}" Jan 30 13:00:01.730034 containerd[1431]: time="2025-01-30T13:00:01.729916423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:00:01.730275 containerd[1431]: time="2025-01-30T13:00:01.730006304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:00:01.730275 containerd[1431]: time="2025-01-30T13:00:01.730022745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:00:01.730387 containerd[1431]: time="2025-01-30T13:00:01.730127427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:00:01.748251 systemd[1]: Started cri-containerd-d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185.scope - libcontainer container d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185. Jan 30 13:00:01.766112 containerd[1431]: time="2025-01-30T13:00:01.766046731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sqgcn,Uid:38be6e59-47ea-4691-b793-ea7bc1f1d1c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\"" Jan 30 13:00:01.766787 kubelet[2520]: E0130 13:00:01.766763 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:01.769876 containerd[1431]: time="2025-01-30T13:00:01.769840449Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:00:01.780225 containerd[1431]: time="2025-01-30T13:00:01.780156863Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47\"" Jan 30 13:00:01.781552 containerd[1431]: time="2025-01-30T13:00:01.781526011Z" level=info msg="StartContainer for \"4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47\"" Jan 30 13:00:01.806246 systemd[1]: Started cri-containerd-4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47.scope - libcontainer container 4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47. Jan 30 13:00:01.825241 containerd[1431]: time="2025-01-30T13:00:01.825192115Z" level=info msg="StartContainer for \"4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47\" returns successfully" Jan 30 13:00:01.837682 systemd[1]: cri-containerd-4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47.scope: Deactivated successfully. 
Jan 30 13:00:01.876008 containerd[1431]: time="2025-01-30T13:00:01.875938886Z" level=info msg="shim disconnected" id=4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47 namespace=k8s.io Jan 30 13:00:01.876008 containerd[1431]: time="2025-01-30T13:00:01.875997767Z" level=warning msg="cleaning up after shim disconnected" id=4b9e9e68055d2f205f9e5411d9ab13a4e43e397a8eba5214580e5369e8a8cf47 namespace=k8s.io Jan 30 13:00:01.876008 containerd[1431]: time="2025-01-30T13:00:01.876006487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:00:02.304026 kubelet[2520]: E0130 13:00:02.303946 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:02.367733 kubelet[2520]: E0130 13:00:02.367697 2520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:00:02.617536 kubelet[2520]: E0130 13:00:02.617508 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:02.621683 containerd[1431]: time="2025-01-30T13:00:02.620887148Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:00:02.630835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273713963.mount: Deactivated successfully. Jan 30 13:00:02.633608 containerd[1431]: time="2025-01-30T13:00:02.631762166Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4\"" Jan 30 13:00:02.633608 containerd[1431]: time="2025-01-30T13:00:02.632687185Z" level=info msg="StartContainer for \"18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4\"" Jan 30 13:00:02.665294 systemd[1]: Started cri-containerd-18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4.scope - libcontainer container 18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4. Jan 30 13:00:02.689564 containerd[1431]: time="2025-01-30T13:00:02.689506564Z" level=info msg="StartContainer for \"18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4\" returns successfully" Jan 30 13:00:02.693378 systemd[1]: cri-containerd-18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4.scope: Deactivated successfully. Jan 30 13:00:02.713314 containerd[1431]: time="2025-01-30T13:00:02.713249120Z" level=info msg="shim disconnected" id=18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4 namespace=k8s.io Jan 30 13:00:02.713314 containerd[1431]: time="2025-01-30T13:00:02.713304882Z" level=warning msg="cleaning up after shim disconnected" id=18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4 namespace=k8s.io Jan 30 13:00:02.713314 containerd[1431]: time="2025-01-30T13:00:02.713314522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:00:03.576787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18384a1ed3abeb9fb3b2cbd3983c7f52443306dc43961934b19bf2995ea44be4-rootfs.mount: Deactivated successfully. 
Jan 30 13:00:03.618599 kubelet[2520]: I0130 13:00:03.617870 2520 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:00:03Z","lastTransitionTime":"2025-01-30T13:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:00:03.621934 kubelet[2520]: E0130 13:00:03.621731 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:03.627433 containerd[1431]: time="2025-01-30T13:00:03.625176100Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:00:03.639180 containerd[1431]: time="2025-01-30T13:00:03.638942727Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de\"" Jan 30 13:00:03.640259 containerd[1431]: time="2025-01-30T13:00:03.640163191Z" level=info msg="StartContainer for \"d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de\"" Jan 30 13:00:03.682279 systemd[1]: Started cri-containerd-d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de.scope - libcontainer container d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de. Jan 30 13:00:03.714419 containerd[1431]: time="2025-01-30T13:00:03.714366953Z" level=info msg="StartContainer for \"d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de\" returns successfully" Jan 30 13:00:03.716410 systemd[1]: cri-containerd-d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de.scope: Deactivated successfully. Jan 30 13:00:03.739690 containerd[1431]: time="2025-01-30T13:00:03.739633444Z" level=info msg="shim disconnected" id=d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de namespace=k8s.io Jan 30 13:00:03.739690 containerd[1431]: time="2025-01-30T13:00:03.739687125Z" level=warning msg="cleaning up after shim disconnected" id=d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de namespace=k8s.io Jan 30 13:00:03.739690 containerd[1431]: time="2025-01-30T13:00:03.739696005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:00:04.577460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2838910761cfc1e8a653a1de3ae7e370fa14f1c70ab26d49e67e239f9b5c2de-rootfs.mount: Deactivated successfully. 
Jan 30 13:00:04.626794 kubelet[2520]: E0130 13:00:04.625566 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:04.629755 containerd[1431]: time="2025-01-30T13:00:04.629667196Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:00:04.648151 containerd[1431]: time="2025-01-30T13:00:04.647754936Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33\"" Jan 30 13:00:04.649280 containerd[1431]: time="2025-01-30T13:00:04.649234164Z" level=info msg="StartContainer for \"8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33\"" Jan 30 13:00:04.682301 systemd[1]: Started cri-containerd-8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33.scope - libcontainer container 8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33. Jan 30 13:00:04.706583 systemd[1]: cri-containerd-8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33.scope: Deactivated successfully. Jan 30 13:00:04.708891 containerd[1431]: time="2025-01-30T13:00:04.708852406Z" level=info msg="StartContainer for \"8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33\" returns successfully" Jan 30 13:00:04.731902 containerd[1431]: time="2025-01-30T13:00:04.731836879Z" level=info msg="shim disconnected" id=8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33 namespace=k8s.io Jan 30 13:00:04.731902 containerd[1431]: time="2025-01-30T13:00:04.731897520Z" level=warning msg="cleaning up after shim disconnected" id=8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33 namespace=k8s.io Jan 30 13:00:04.731902 containerd[1431]: time="2025-01-30T13:00:04.731908600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:00:05.576768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e43debc19b5ee22079dd8568df7bfae8977199a703ac5224aa0ed44b6861b33-rootfs.mount: Deactivated successfully. Jan 30 13:00:05.629803 kubelet[2520]: E0130 13:00:05.629693 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:05.637495 containerd[1431]: time="2025-01-30T13:00:05.637348909Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:00:05.653527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3712781967.mount: Deactivated successfully. 
Jan 30 13:00:05.654204 containerd[1431]: time="2025-01-30T13:00:05.654039174Z" level=info msg="CreateContainer within sandbox \"d6a9d8582ef2b97df4dd0e7c2a2a8e038c300781df67ea225362036f5281c185\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0eabb7bb597669a67de11ad16215d033de718eaf2d10f6c9881a94a82a725817\"" Jan 30 13:00:05.655106 containerd[1431]: time="2025-01-30T13:00:05.655059632Z" level=info msg="StartContainer for \"0eabb7bb597669a67de11ad16215d033de718eaf2d10f6c9881a94a82a725817\"" Jan 30 13:00:05.691713 systemd[1]: Started cri-containerd-0eabb7bb597669a67de11ad16215d033de718eaf2d10f6c9881a94a82a725817.scope - libcontainer container 0eabb7bb597669a67de11ad16215d033de718eaf2d10f6c9881a94a82a725817. Jan 30 13:00:05.727584 containerd[1431]: time="2025-01-30T13:00:05.727315630Z" level=info msg="StartContainer for \"0eabb7bb597669a67de11ad16215d033de718eaf2d10f6c9881a94a82a725817\" returns successfully" Jan 30 13:00:06.068091 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:00:06.636599 kubelet[2520]: E0130 13:00:06.636549 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:06.651703 kubelet[2520]: I0130 13:00:06.651639 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sqgcn" podStartSLOduration=5.651621274 podStartE2EDuration="5.651621274s" podCreationTimestamp="2025-01-30 13:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:00:06.651497871 +0000 UTC m=+84.454623229" watchObservedRunningTime="2025-01-30 13:00:06.651621274 +0000 UTC m=+84.454746632" Jan 30 13:00:07.693614 kubelet[2520]: E0130 13:00:07.693580 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:08.970599 systemd-networkd[1355]: lxc_health: Link UP Jan 30 13:00:08.980558 systemd-networkd[1355]: lxc_health: Gained carrier Jan 30 13:00:09.304888 kubelet[2520]: E0130 13:00:09.304306 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:09.696267 kubelet[2520]: E0130 13:00:09.696172 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:10.306521 kubelet[2520]: E0130 13:00:10.306239 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:10.621347 systemd-networkd[1355]: lxc_health: Gained IPv6LL Jan 30 13:00:10.643908 kubelet[2520]: E0130 13:00:10.643613 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:11.647864 kubelet[2520]: E0130 13:00:11.647806 2520 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:00:14.433177 kubelet[2520]: E0130 13:00:14.432937 2520 
upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57388->127.0.0.1:33939: write tcp 127.0.0.1:57388->127.0.0.1:33939: write: broken pipe Jan 30 13:00:14.437183 sshd[4337]: pam_unix(sshd:session): session closed for user core Jan 30 13:00:14.441681 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:46506.service: Deactivated successfully. Jan 30 13:00:14.444134 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:00:14.445063 systemd-logind[1410]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:00:14.446138 systemd-logind[1410]: Removed session 26.