Oct 8 19:49:33.899644 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:49:33.899665 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:49:33.899674 kernel: KASLR enabled
Oct 8 19:49:33.899680 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:49:33.899686 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 19:49:33.899692 kernel: random: crng init done
Oct 8 19:49:33.899699 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:49:33.899704 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 19:49:33.899711 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:49:33.899718 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899724 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899730 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899736 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899742 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899750 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899757 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899764 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899770 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:49:33.899777 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 19:49:33.899783 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:49:33.899789 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:49:33.899795 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Oct 8 19:49:33.899802 kernel: Zone ranges:
Oct 8 19:49:33.899808 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:49:33.899814 kernel: DMA32 empty
Oct 8 19:49:33.899821 kernel: Normal empty
Oct 8 19:49:33.899828 kernel: Movable zone start for each node
Oct 8 19:49:33.899834 kernel: Early memory node ranges
Oct 8 19:49:33.899840 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 19:49:33.899846 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 19:49:33.899853 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 19:49:33.899859 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 19:49:33.899866 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 19:49:33.899872 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 19:49:33.899878 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 19:49:33.899893 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:49:33.899899 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 19:49:33.899907 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:49:33.899914 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:49:33.899920 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:49:33.899929 kernel: psci: Trusted OS migration not required
Oct 8 19:49:33.899936 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:49:33.899943 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:49:33.899951 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:49:33.899958 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:49:33.899964 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 19:49:33.899971 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:49:33.899987 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:49:33.899994 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:49:33.900000 kernel: CPU features: detected: Spectre-v4
Oct 8 19:49:33.900007 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:49:33.900026 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:49:33.900034 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:49:33.900043 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:49:33.900049 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:49:33.900056 kernel: alternatives: applying boot alternatives
Oct 8 19:49:33.900064 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:49:33.900071 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:49:33.900077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:49:33.900085 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:49:33.900091 kernel: Fallback order for Node 0: 0
Oct 8 19:49:33.900098 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 19:49:33.900105 kernel: Policy zone: DMA
Oct 8 19:49:33.900111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:49:33.900119 kernel: software IO TLB: area num 4.
Oct 8 19:49:33.900126 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 19:49:33.900133 kernel: Memory: 2386784K/2572288K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 185504K reserved, 0K cma-reserved)
Oct 8 19:49:33.900140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:49:33.900146 kernel: trace event string verifier disabled
Oct 8 19:49:33.900153 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:49:33.900160 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:49:33.900167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:49:33.900174 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:49:33.900181 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:49:33.900187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:49:33.900194 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:49:33.900202 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:49:33.900209 kernel: GICv3: 256 SPIs implemented
Oct 8 19:49:33.900215 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:49:33.900222 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:49:33.900229 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:49:33.900235 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:49:33.900242 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:49:33.900249 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:49:33.900256 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:49:33.900262 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 19:49:33.900269 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 19:49:33.900277 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:49:33.900284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:49:33.900291 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:49:33.900297 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:49:33.900304 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:49:33.900311 kernel: arm-pv: using stolen time PV
Oct 8 19:49:33.900318 kernel: Console: colour dummy device 80x25
Oct 8 19:49:33.900325 kernel: ACPI: Core revision 20230628
Oct 8 19:49:33.900332 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:49:33.900339 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:49:33.900347 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:49:33.900354 kernel: SELinux: Initializing.
Oct 8 19:49:33.900360 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:49:33.900368 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:49:33.900374 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:49:33.900382 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:49:33.900388 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:49:33.900395 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:49:33.900402 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:49:33.900409 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:49:33.900417 kernel: Remapping and enabling EFI services.
Oct 8 19:49:33.900424 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:49:33.900431 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:49:33.900438 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:49:33.900445 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 19:49:33.900452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:49:33.900459 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:49:33.900465 kernel: Detected PIPT I-cache on CPU2
Oct 8 19:49:33.900472 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 19:49:33.900481 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 19:49:33.900488 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:49:33.900499 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 19:49:33.900507 kernel: Detected PIPT I-cache on CPU3
Oct 8 19:49:33.900514 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 19:49:33.900522 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 19:49:33.900529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:49:33.900536 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 19:49:33.900543 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:49:33.900552 kernel: SMP: Total of 4 processors activated.
Oct 8 19:49:33.900559 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:49:33.900566 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:49:33.900574 kernel: CPU features: detected: Common not Private translations
Oct 8 19:49:33.900581 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:49:33.900588 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:49:33.900595 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:49:33.900603 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:49:33.900611 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:49:33.900619 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:49:33.900626 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:49:33.900633 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:49:33.900641 kernel: alternatives: applying system-wide alternatives
Oct 8 19:49:33.900648 kernel: devtmpfs: initialized
Oct 8 19:49:33.900655 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:49:33.900663 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:49:33.900670 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:49:33.900678 kernel: SMBIOS 3.0.0 present.
Oct 8 19:49:33.900686 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 19:49:33.900693 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:49:33.900701 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:49:33.900708 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:49:33.900715 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:49:33.900723 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:49:33.900730 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Oct 8 19:49:33.900737 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:49:33.900746 kernel: cpuidle: using governor menu
Oct 8 19:49:33.900753 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:49:33.900760 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:49:33.900768 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:49:33.900775 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:49:33.900782 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:49:33.900789 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:49:33.900797 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:49:33.900804 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:49:33.900813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:49:33.900820 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:49:33.900827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:49:33.900834 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:49:33.900842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:49:33.900849 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:49:33.900857 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:49:33.900864 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:49:33.900871 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:49:33.900879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:49:33.900891 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:49:33.900898 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:49:33.900906 kernel: ACPI: Interpreter enabled
Oct 8 19:49:33.900913 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:49:33.900920 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:49:33.900928 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:49:33.900935 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:49:33.900942 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:49:33.901190 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:49:33.901280 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:49:33.901345 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:49:33.901409 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:49:33.901473 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:49:33.901483 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:49:33.901491 kernel: PCI host bridge to bus 0000:00
Oct 8 19:49:33.901561 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:49:33.901618 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:49:33.901674 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:49:33.901750 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:49:33.901831 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:49:33.901919 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:49:33.902022 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 19:49:33.902094 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 19:49:33.902160 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:49:33.902225 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:49:33.902291 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 19:49:33.902356 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 19:49:33.902417 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:49:33.902475 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:49:33.902534 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:49:33.902543 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:49:33.902551 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:49:33.902558 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:49:33.902566 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:49:33.902573 kernel: iommu: Default domain type: Translated
Oct 8 19:49:33.902580 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:49:33.902588 kernel: efivars: Registered efivars operations
Oct 8 19:49:33.902596 kernel: vgaarb: loaded
Oct 8 19:49:33.902604 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:49:33.902611 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:49:33.902618 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:49:33.902626 kernel: pnp: PnP ACPI init
Oct 8 19:49:33.902698 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:49:33.902709 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:49:33.902717 kernel: NET: Registered PF_INET protocol family
Oct 8 19:49:33.902726 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:49:33.902733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:49:33.902740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:49:33.902748 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:49:33.902755 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:49:33.902763 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:49:33.902770 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:49:33.902777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:49:33.902784 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:49:33.902806 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:49:33.902814 kernel: kvm [1]: HYP mode not available
Oct 8 19:49:33.902821 kernel: Initialise system trusted keyrings
Oct 8 19:49:33.902828 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:49:33.902836 kernel: Key type asymmetric registered
Oct 8 19:49:33.902843 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:49:33.902850 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:49:33.902857 kernel: io scheduler mq-deadline registered
Oct 8 19:49:33.902865 kernel: io scheduler kyber registered
Oct 8 19:49:33.902873 kernel: io scheduler bfq registered
Oct 8 19:49:33.902888 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:49:33.902896 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:49:33.902904 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:49:33.903006 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 19:49:33.903053 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:49:33.903062 kernel: thunder_xcv, ver 1.0
Oct 8 19:49:33.903069 kernel: thunder_bgx, ver 1.0
Oct 8 19:49:33.903077 kernel: nicpf, ver 1.0
Oct 8 19:49:33.903084 kernel: nicvf, ver 1.0
Oct 8 19:49:33.903173 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:49:33.903236 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:49:33 UTC (1728416973)
Oct 8 19:49:33.903246 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:49:33.903253 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 19:49:33.903261 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:49:33.903268 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:49:33.903275 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:49:33.903286 kernel: Segment Routing with IPv6
Oct 8 19:49:33.903293 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:49:33.903301 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:49:33.903308 kernel: Key type dns_resolver registered
Oct 8 19:49:33.903315 kernel: registered taskstats version 1
Oct 8 19:49:33.903322 kernel: Loading compiled-in X.509 certificates
Oct 8 19:49:33.903330 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:49:33.903337 kernel: Key type .fscrypt registered
Oct 8 19:49:33.903344 kernel: Key type fscrypt-provisioning registered
Oct 8 19:49:33.903351 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:49:33.903360 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:49:33.903367 kernel: ima: No architecture policies found
Oct 8 19:49:33.903375 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:49:33.903382 kernel: clk: Disabling unused clocks
Oct 8 19:49:33.903389 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:49:33.903397 kernel: Run /init as init process
Oct 8 19:49:33.903404 kernel: with arguments:
Oct 8 19:49:33.903411 kernel: /init
Oct 8 19:49:33.903419 kernel: with environment:
Oct 8 19:49:33.903426 kernel: HOME=/
Oct 8 19:49:33.903434 kernel: TERM=linux
Oct 8 19:49:33.903441 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:49:33.903449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:49:33.903459 systemd[1]: Detected virtualization kvm.
Oct 8 19:49:33.903467 systemd[1]: Detected architecture arm64.
Oct 8 19:49:33.903474 systemd[1]: Running in initrd.
Oct 8 19:49:33.903483 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:49:33.903491 systemd[1]: Hostname set to .
Oct 8 19:49:33.903499 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:49:33.903507 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:49:33.903515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:49:33.903522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:49:33.903531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:49:33.903539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:49:33.903548 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:49:33.903556 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:49:33.903565 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:49:33.903573 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:49:33.903581 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:49:33.903589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:49:33.903597 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:49:33.903606 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:49:33.903614 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:49:33.903622 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:49:33.903630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:49:33.903638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:49:33.903645 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:49:33.903654 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:49:33.903662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:49:33.903671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:49:33.903679 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:49:33.903686 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:49:33.903694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:49:33.903702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:49:33.903710 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:49:33.903717 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:49:33.903725 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:49:33.903733 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:49:33.903743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:49:33.903750 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:49:33.903758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:49:33.903766 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:49:33.903775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:49:33.903784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:49:33.903807 systemd-journald[237]: Collecting audit messages is disabled.
Oct 8 19:49:33.903825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:49:33.903835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:49:33.903844 systemd-journald[237]: Journal started
Oct 8 19:49:33.903862 systemd-journald[237]: Runtime Journal (/run/log/journal/8c1ec2365d864cefb5c132d60d6fac00) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:49:33.894715 systemd-modules-load[239]: Inserted module 'overlay'
Oct 8 19:49:33.905238 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:49:33.908688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:49:33.911605 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:49:33.914207 kernel: Bridge firewalling registered
Oct 8 19:49:33.912547 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 8 19:49:33.914147 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:49:33.915176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:49:33.929166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:49:33.930136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:49:33.931730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:49:33.934764 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:49:33.938181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:49:33.939807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:49:33.942426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:49:33.950368 dracut-cmdline[274]: dracut-dracut-053
Oct 8 19:49:33.953007 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:49:33.979770 systemd-resolved[281]: Positive Trust Anchors:
Oct 8 19:49:33.979792 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:49:33.979825 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:49:33.984840 systemd-resolved[281]: Defaulting to hostname 'linux'.
Oct 8 19:49:33.985829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:49:33.986721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:49:34.024006 kernel: SCSI subsystem initialized
Oct 8 19:49:34.028991 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:49:34.036007 kernel: iscsi: registered transport (tcp)
Oct 8 19:49:34.051011 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:49:34.051031 kernel: QLogic iSCSI HBA Driver
Oct 8 19:49:34.090880 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:49:34.105159 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:49:34.122380 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:49:34.122440 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:49:34.122462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:49:34.168004 kernel: raid6: neonx8 gen() 15657 MB/s
Oct 8 19:49:34.184993 kernel: raid6: neonx4 gen() 15660 MB/s
Oct 8 19:49:34.201999 kernel: raid6: neonx2 gen() 13271 MB/s
Oct 8 19:49:34.218990 kernel: raid6: neonx1 gen() 10483 MB/s
Oct 8 19:49:34.235988 kernel: raid6: int64x8 gen() 6965 MB/s
Oct 8 19:49:34.252994 kernel: raid6: int64x4 gen() 7313 MB/s
Oct 8 19:49:34.269993 kernel: raid6: int64x2 gen() 6109 MB/s
Oct 8 19:49:34.286995 kernel: raid6: int64x1 gen() 5028 MB/s
Oct 8 19:49:34.287022 kernel: raid6: using algorithm neonx4 gen() 15660 MB/s
Oct 8 19:49:34.304010 kernel: raid6: .... xor() 12131 MB/s, rmw enabled
Oct 8 19:49:34.304026 kernel: raid6: using neon recovery algorithm
Oct 8 19:49:34.308993 kernel: xor: measuring software checksum speed
Oct 8 19:49:34.309010 kernel: 8regs : 19812 MB/sec
Oct 8 19:49:34.310412 kernel: 32regs : 17658 MB/sec
Oct 8 19:49:34.310428 kernel: arm64_neon : 26883 MB/sec
Oct 8 19:49:34.310438 kernel: xor: using function: arm64_neon (26883 MB/sec)
Oct 8 19:49:34.362003 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:49:34.372708 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:49:34.381149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:49:34.395889 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 8 19:49:34.399208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:49:34.400908 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:49:34.417077 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Oct 8 19:49:34.446022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:49:34.455223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:49:34.495532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:49:34.504054 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:49:34.515030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:49:34.519330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:49:34.521540 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:49:34.522441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:49:34.531569 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 8 19:49:34.531758 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:49:34.537238 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:49:34.543547 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:49:34.543601 kernel: GPT:9289727 != 19775487
Oct 8 19:49:34.543611 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:49:34.543622 kernel: GPT:9289727 != 19775487
Oct 8 19:49:34.544330 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:49:34.544367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:49:34.547606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:49:34.547681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:49:34.551436 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:49:34.553450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:49:34.558622 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511)
Oct 8 19:49:34.553509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:49:34.557294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:49:34.563326 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (513)
Oct 8 19:49:34.574290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:49:34.576156 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:49:34.585295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:49:34.592780 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:49:34.597055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:49:34.605758 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:49:34.606649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:49:34.611787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:49:34.627122 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:49:34.628597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:49:34.633142 disk-uuid[552]: Primary Header is updated.
Oct 8 19:49:34.633142 disk-uuid[552]: Secondary Entries is updated.
Oct 8 19:49:34.633142 disk-uuid[552]: Secondary Header is updated.
Oct 8 19:49:34.636003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:49:34.653042 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:49:35.649742 disk-uuid[553]: The operation has completed successfully.
Oct 8 19:49:35.651156 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:49:35.668659 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:49:35.668755 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:49:35.692138 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:49:35.694905 sh[576]: Success
Oct 8 19:49:35.708039 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:49:35.734273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:49:35.751871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:49:35.755035 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:49:35.764593 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575
Oct 8 19:49:35.764632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:49:35.764643 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:49:35.764653 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:49:35.765984 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:49:35.769303 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:49:35.770292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:49:35.789222 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:49:35.790936 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:49:35.798267 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:49:35.798311 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:49:35.798322 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:49:35.801012 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:49:35.808474 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:49:35.810000 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:49:35.815114 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:49:35.820155 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:49:35.884382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:49:35.895185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:49:35.915694 ignition[666]: Ignition 2.18.0
Oct 8 19:49:35.915704 ignition[666]: Stage: fetch-offline
Oct 8 19:49:35.915748 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:35.915756 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:35.915848 ignition[666]: parsed url from cmdline: ""
Oct 8 19:49:35.915851 ignition[666]: no config URL provided
Oct 8 19:49:35.915856 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:49:35.915862 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:49:35.915897 ignition[666]: op(1): [started] loading QEMU firmware config module
Oct 8 19:49:35.915902 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:49:35.924408 systemd-networkd[769]: lo: Link UP
Oct 8 19:49:35.924419 systemd-networkd[769]: lo: Gained carrier
Oct 8 19:49:35.925085 systemd-networkd[769]: Enumeration completed
Oct 8 19:49:35.925581 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:49:35.925584 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:49:35.926388 systemd-networkd[769]: eth0: Link UP
Oct 8 19:49:35.926391 systemd-networkd[769]: eth0: Gained carrier
Oct 8 19:49:35.926397 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:49:35.931184 ignition[666]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:49:35.928421 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:49:35.930106 systemd[1]: Reached target network.target - Network.
Oct 8 19:49:35.947014 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:49:35.971787 ignition[666]: parsing config with SHA512: 31f5b6d075b7b81c319a2871b93d72082a0b0a29245d7e28681aed5eba87d264c8b80242a4d96e136123cc07c757bdf800825a9686b5e7797b0e103df2f7b25c
Oct 8 19:49:35.975762 unknown[666]: fetched base config from "system"
Oct 8 19:49:35.975771 unknown[666]: fetched user config from "qemu"
Oct 8 19:49:35.976212 ignition[666]: fetch-offline: fetch-offline passed
Oct 8 19:49:35.976278 ignition[666]: Ignition finished successfully
Oct 8 19:49:35.978378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:49:35.979479 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:49:35.991152 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:49:36.001477 ignition[776]: Ignition 2.18.0
Oct 8 19:49:36.001486 ignition[776]: Stage: kargs
Oct 8 19:49:36.001642 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:36.001651 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:36.002559 ignition[776]: kargs: kargs passed
Oct 8 19:49:36.002603 ignition[776]: Ignition finished successfully
Oct 8 19:49:36.005730 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:49:36.017151 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:49:36.026953 ignition[786]: Ignition 2.18.0
Oct 8 19:49:36.026962 ignition[786]: Stage: disks
Oct 8 19:49:36.027122 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:36.027131 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:36.027950 ignition[786]: disks: disks passed
Oct 8 19:49:36.028010 ignition[786]: Ignition finished successfully
Oct 8 19:49:36.030173 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:49:36.031857 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:49:36.032719 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:49:36.034230 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:49:36.035590 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:49:36.036887 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:49:36.038966 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:49:36.053465 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:49:36.056657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:49:36.068087 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:49:36.108716 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:49:36.109999 kernel: EXT4-fs (vda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:49:36.109951 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:49:36.121056 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:49:36.122969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:49:36.123885 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:49:36.123925 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:49:36.123949 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:49:36.129277 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:49:36.131210 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:49:36.133996 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Oct 8 19:49:36.135014 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:49:36.135045 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:49:36.135056 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:49:36.138020 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:49:36.139021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:49:36.176967 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:49:36.179932 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:49:36.183642 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:49:36.186382 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:49:36.251806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:49:36.262174 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:49:36.263403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:49:36.267992 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:49:36.281902 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:49:36.283344 ignition[918]: INFO : Ignition 2.18.0
Oct 8 19:49:36.283344 ignition[918]: INFO : Stage: mount
Oct 8 19:49:36.285100 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:36.285100 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:36.285100 ignition[918]: INFO : mount: mount passed
Oct 8 19:49:36.285100 ignition[918]: INFO : Ignition finished successfully
Oct 8 19:49:36.285945 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:49:36.296092 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:49:36.763331 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:49:36.773154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:49:36.778010 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Oct 8 19:49:36.780005 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:49:36.780023 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:49:36.780034 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:49:36.783015 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:49:36.783373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:49:36.799505 ignition[948]: INFO : Ignition 2.18.0
Oct 8 19:49:36.799505 ignition[948]: INFO : Stage: files
Oct 8 19:49:36.800707 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:36.800707 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:36.800707 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:49:36.803472 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:49:36.803472 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:49:36.803472 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:49:36.803472 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:49:36.807236 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:49:36.807236 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:49:36.807236 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:49:36.803613 unknown[948]: wrote ssh authorized keys file for user: core
Oct 8 19:49:36.911481 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:49:37.079049 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:49:37.079049 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:49:37.082019 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Oct 8 19:49:37.413829 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:49:37.495332 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 8 19:49:37.496784 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 8 19:49:37.514231 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Oct 8 19:49:37.729081 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 19:49:37.891600 systemd-networkd[769]: eth0: Gained IPv6LL
Oct 8 19:49:37.921307 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 8 19:49:37.921307 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 8 19:49:37.924120 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:49:37.942907 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:49:37.946345 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:49:37.948117 ignition[948]: INFO : files: files passed
Oct 8 19:49:37.948117 ignition[948]: INFO : Ignition finished successfully
Oct 8 19:49:37.948972 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:49:37.962109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:49:37.964828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:49:37.967118 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:49:37.968048 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:49:37.972695 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:49:37.975256 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:49:37.975256 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:49:37.977451 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:49:37.977245 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:49:37.978433 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:49:37.987354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:49:38.005207 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:49:38.005331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:49:38.007078 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:49:38.008535 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:49:38.009776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:49:38.010499 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:49:38.024746 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:49:38.037126 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:49:38.044411 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:49:38.045336 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:49:38.046802 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:49:38.048222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:49:38.048332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:49:38.050124 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:49:38.051537 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:49:38.052718 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:49:38.053944 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:49:38.055354 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:49:38.056804 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:49:38.058120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:49:38.059521 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:49:38.060894 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:49:38.062134 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:49:38.063283 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:49:38.063398 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:49:38.065088 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:49:38.066461 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:49:38.067780 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:49:38.071069 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:49:38.072859 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:49:38.072998 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:49:38.074956 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:49:38.075080 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:49:38.076561 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:49:38.077782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:49:38.081043 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:49:38.082056 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:49:38.083585 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:49:38.084777 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:49:38.084867 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:49:38.085955 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:49:38.086054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:49:38.087141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:49:38.087246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:49:38.088493 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:49:38.088587 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:49:38.101166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:49:38.102586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:49:38.102731 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:49:38.105460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:49:38.106797 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:49:38.106940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:49:38.108686 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:49:38.108809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:49:38.112753 ignition[1004]: INFO : Ignition 2.18.0
Oct 8 19:49:38.112753 ignition[1004]: INFO : Stage: umount
Oct 8 19:49:38.114800 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:49:38.114800 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:49:38.114800 ignition[1004]: INFO : umount: umount passed
Oct 8 19:49:38.114800 ignition[1004]: INFO : Ignition finished successfully
Oct 8 19:49:38.114518 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:49:38.114601 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:49:38.119243 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:49:38.119766 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:49:38.119861 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:49:38.121345 systemd[1]: Stopped target network.target - Network.
Oct 8 19:49:38.122248 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:49:38.122318 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:49:38.123385 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:49:38.123426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:49:38.124628 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:49:38.124664 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:49:38.125950 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:49:38.126003 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:49:38.128326 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:49:38.129363 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:49:38.137544 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:49:38.137673 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:49:38.139382 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:49:38.139431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:49:38.139438 systemd-networkd[769]: eth0: DHCPv6 lease lost
Oct 8 19:49:38.141338 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:49:38.142152 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:49:38.144245 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:49:38.144298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:49:38.155117 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:49:38.155902 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:49:38.155968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:49:38.157820 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:49:38.157868 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:49:38.159275 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:49:38.159320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:49:38.161083 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:49:38.173431 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:49:38.173551 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:49:38.178546 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:49:38.178686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:49:38.181218 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:49:38.181266 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:49:38.182081 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:49:38.182110 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:49:38.183775 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:49:38.183826 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:49:38.186542 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:49:38.186588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:49:38.189677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:49:38.189724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:49:38.198123 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:49:38.198918 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:49:38.198968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:49:38.200705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:49:38.200746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:49:38.202472 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:49:38.202554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:49:38.203792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:49:38.203863 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:49:38.205739 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:49:38.207127 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:49:38.207187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:49:38.209141 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:49:38.218715 systemd[1]: Switching root.
Oct 8 19:49:38.243084 systemd-journald[237]: Journal stopped
Oct 8 19:49:38.912140 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:49:38.912194 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:49:38.912206 kernel: SELinux: policy capability open_perms=1
Oct 8 19:49:38.912216 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:49:38.912226 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:49:38.912236 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:49:38.912245 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:49:38.912255 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:49:38.912267 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:49:38.912277 systemd[1]: Successfully loaded SELinux policy in 29.870ms.
Oct 8 19:49:38.912299 kernel: audit: type=1403 audit(1728416978.408:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:49:38.912310 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.074ms.
Oct 8 19:49:38.912321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:49:38.912332 systemd[1]: Detected virtualization kvm.
Oct 8 19:49:38.912342 systemd[1]: Detected architecture arm64.
Oct 8 19:49:38.912352 systemd[1]: Detected first boot.
Oct 8 19:49:38.912363 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:49:38.912375 zram_generator::config[1048]: No configuration found.
Oct 8 19:49:38.912386 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:49:38.912396 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:49:38.912406 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:49:38.912416 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:49:38.912427 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:49:38.912438 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:49:38.912448 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:49:38.912460 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:49:38.912471 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:49:38.912481 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:49:38.912492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:49:38.912502 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:49:38.912513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:49:38.912524 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:49:38.912535 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:49:38.912545 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:49:38.912558 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:49:38.912572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:49:38.912582 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:49:38.912593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:49:38.912603 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:49:38.912613 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:49:38.912624 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:49:38.912634 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:49:38.912646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:49:38.912656 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:49:38.912667 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:49:38.912677 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:49:38.912687 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:49:38.912698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:49:38.912708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:49:38.912718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:49:38.912729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:49:38.912740 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:49:38.912752 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:49:38.912762 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:49:38.912773 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:49:38.912783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:49:38.912793 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:49:38.912804 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:49:38.912814 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:49:38.912825 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:49:38.912836 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:49:38.912847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:49:38.912857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:49:38.912867 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:49:38.912885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:49:38.912898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:49:38.912908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:49:38.912919 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:49:38.912931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:49:38.912942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:49:38.912952 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:49:38.912963 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:49:38.912973 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:49:38.912997 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:49:38.913009 kernel: loop: module loaded
Oct 8 19:49:38.913019 kernel: fuse: init (API version 7.39)
Oct 8 19:49:38.913029 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:49:38.913041 kernel: ACPI: bus type drm_connector registered
Oct 8 19:49:38.913051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:49:38.913062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:49:38.913073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:49:38.913084 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:49:38.913109 systemd-journald[1120]: Collecting audit messages is disabled.
Oct 8 19:49:38.913134 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:49:38.913144 systemd[1]: Stopped verity-setup.service.
Oct 8 19:49:38.913156 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:49:38.913166 systemd-journald[1120]: Journal started
Oct 8 19:49:38.913186 systemd-journald[1120]: Runtime Journal (/run/log/journal/8c1ec2365d864cefb5c132d60d6fac00) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:49:38.751508 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:49:38.764538 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:49:38.764887 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:49:38.915385 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:49:38.915940 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:49:38.916897 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:49:38.917730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:49:38.918680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:49:38.919585 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:49:38.920493 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:49:38.921555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:49:38.922724 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:49:38.922863 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:49:38.923967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:49:38.924127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:49:38.925153 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:49:38.925266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:49:38.926242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:49:38.926367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:49:38.927462 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:49:38.927589 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:49:38.928623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:49:38.928759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:49:38.929774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:49:38.930852 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:49:38.932183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:49:38.943350 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:49:38.958096 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:49:38.959774 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:49:38.960658 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:49:38.960698 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:49:38.962317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:49:38.964073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:49:38.965762 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:49:38.966645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:49:38.968243 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:49:38.969851 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:49:38.970769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:49:38.972142 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:49:38.975963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:49:38.977384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:49:38.982054 systemd-journald[1120]: Time spent on flushing to /var/log/journal/8c1ec2365d864cefb5c132d60d6fac00 is 14.647ms for 856 entries.
Oct 8 19:49:38.982054 systemd-journald[1120]: System Journal (/var/log/journal/8c1ec2365d864cefb5c132d60d6fac00) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:49:39.016278 systemd-journald[1120]: Received client request to flush runtime journal.
Oct 8 19:49:39.016327 kernel: loop0: detected capacity change from 0 to 113672
Oct 8 19:49:39.016349 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:49:38.982293 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:49:38.984828 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:49:38.989020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:49:38.990227 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:49:38.991270 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:49:38.992445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:49:38.993772 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:49:38.998201 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:49:39.007260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:49:39.010189 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:49:39.019850 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:49:39.026859 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:49:39.027272 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:49:39.033804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:49:39.035560 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:49:39.036234 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:49:39.042509 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:49:39.049185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:49:39.057003 kernel: loop1: detected capacity change from 0 to 194096
Oct 8 19:49:39.067473 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 8 19:49:39.067488 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 8 19:49:39.073614 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:49:39.088368 kernel: loop2: detected capacity change from 0 to 59688
Oct 8 19:49:39.112141 kernel: loop3: detected capacity change from 0 to 113672
Oct 8 19:49:39.116009 kernel: loop4: detected capacity change from 0 to 194096
Oct 8 19:49:39.121053 kernel: loop5: detected capacity change from 0 to 59688
Oct 8 19:49:39.123831 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:49:39.124228 (sd-merge)[1184]: Merged extensions into '/usr'.
Oct 8 19:49:39.128087 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:49:39.128104 systemd[1]: Reloading...
Oct 8 19:49:39.181003 zram_generator::config[1212]: No configuration found.
Oct 8 19:49:39.252480 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:49:39.271100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:49:39.308354 systemd[1]: Reloading finished in 179 ms.
Oct 8 19:49:39.346648 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:49:39.347832 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:49:39.361212 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:49:39.362827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:49:39.372963 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:49:39.372991 systemd[1]: Reloading...
Oct 8 19:49:39.380436 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:49:39.380689 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:49:39.381333 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:49:39.381541 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Oct 8 19:49:39.381589 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Oct 8 19:49:39.383859 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:49:39.383878 systemd-tmpfiles[1244]: Skipping /boot
Oct 8 19:49:39.390607 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:49:39.390623 systemd-tmpfiles[1244]: Skipping /boot
Oct 8 19:49:39.420013 zram_generator::config[1270]: No configuration found.
Oct 8 19:49:39.493325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:49:39.529948 systemd[1]: Reloading finished in 156 ms.
Oct 8 19:49:39.544740 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:49:39.562352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:49:39.569502 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:49:39.571539 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:49:39.573449 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:49:39.579592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:49:39.588563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:49:39.590816 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:49:39.593908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:49:39.595090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:49:39.597325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:49:39.601408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:49:39.602255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:49:39.606258 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:49:39.607882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:49:39.608027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:49:39.610492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:49:39.610603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:49:39.612165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:49:39.617776 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:49:39.617934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:49:39.621737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:49:39.624228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:49:39.624251 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Oct 8 19:49:39.629429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:49:39.631693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:49:39.635129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:49:39.639223 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:49:39.640816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:49:39.643828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:49:39.645379 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:49:39.646748 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:49:39.646881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:49:39.648129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:49:39.648241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:49:39.649550 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:49:39.650207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:49:39.653573 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:49:39.665755 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:49:39.669867 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:49:39.680683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:49:39.683998 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1336)
Oct 8 19:49:39.681779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:49:39.683694 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:49:39.685412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:49:39.689214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:49:39.690033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:49:39.691546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:49:39.695423 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:49:39.698240 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:49:39.698743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:49:39.698857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:49:39.700048 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:49:39.700169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:49:39.701182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:49:39.701294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:49:39.702419 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 19:49:39.707003 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1346)
Oct 8 19:49:39.707246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:49:39.712173 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:49:39.712370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:49:39.717719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:49:39.725917 augenrules[1378]: No rules
Oct 8 19:49:39.727316 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:49:39.768839 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:49:39.771132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:49:39.772105 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:49:39.781430 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:49:39.782447 systemd-networkd[1372]: lo: Link UP
Oct 8 19:49:39.782461 systemd-networkd[1372]: lo: Gained carrier
Oct 8 19:49:39.783233 systemd-networkd[1372]: Enumeration completed
Oct 8 19:49:39.783308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:49:39.784401 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:49:39.784407 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:49:39.785299 systemd-resolved[1309]: Positive Trust Anchors:
Oct 8 19:49:39.785621 systemd-networkd[1372]: eth0: Link UP
Oct 8 19:49:39.785739 systemd-networkd[1372]: eth0: Gained carrier
Oct 8 19:49:39.785755 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:49:39.788025 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:49:39.788057 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:49:39.788151 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:49:39.797725 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Oct 8 19:49:39.805044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:49:39.806841 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:49:39.808112 systemd[1]: Reached target network.target - Network.
Oct 8 19:49:39.808117 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:49:39.808951 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
Oct 8 19:49:39.808995 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:49:39.809859 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:49:39.809972 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2024-10-08 19:49:39.649409 UTC.
Oct 8 19:49:39.832265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:49:39.853082 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:49:39.867243 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:49:39.868375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:49:39.877293 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:49:39.916304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:49:39.917528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:49:39.920082 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:49:39.920973 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:49:39.921954 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:49:39.923176 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:49:39.924157 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:49:39.925329 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:49:39.926313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:49:39.926346 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:49:39.927104 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:49:39.928661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:49:39.930770 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:49:39.940821 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:49:39.942794 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:49:39.944224 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:49:39.945247 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:49:39.946087 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:49:39.946864 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:49:39.946903 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:49:39.947800 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:49:39.949586 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:49:39.952098 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:49:39.952521 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:49:39.957830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:49:39.958774 jq[1413]: false
Oct 8 19:49:39.958659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:49:39.959658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:49:39.962316 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:49:39.965927 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:49:39.969163 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:49:39.973956 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:49:39.976188 extend-filesystems[1414]: Found loop3
Oct 8 19:49:39.977068 extend-filesystems[1414]: Found loop4
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found loop5
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda1
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda2
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda3
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found usr
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda4
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda6
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda7
Oct 8 19:49:39.977598 extend-filesystems[1414]: Found vda9
Oct 8 19:49:39.977598 extend-filesystems[1414]: Checking size of /dev/vda9
Oct 8 19:49:39.977317 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:49:39.977686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:49:39.978269 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:49:39.985124 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:49:39.987258 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:49:39.990553 dbus-daemon[1412]: [system] SELinux support is enabled
Oct 8 19:49:39.991282 jq[1428]: true
Oct 8 19:49:39.992268 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:49:39.996225 extend-filesystems[1414]: Resized partition /dev/vda9
Oct 8 19:49:39.997947 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:49:39.998120 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:49:39.998361 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:49:39.998496 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:49:39.999797 extend-filesystems[1434]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 19:49:40.002257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:49:40.002413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:49:40.005994 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:49:40.013194 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:49:40.013240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:49:40.014993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:49:40.015021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:49:40.017247 jq[1436]: true
Oct 8 19:49:40.018134 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:49:40.026593 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 19:49:40.027720 tar[1435]: linux-arm64/helm
Oct 8 19:49:40.027879 update_engine[1427]: I1008 19:49:40.027024  1427 main.cc:92] Flatcar Update Engine starting
Oct 8 19:49:40.031060 systemd-logind[1425]: New seat seat0.
Oct 8 19:49:40.046088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1340)
Oct 8 19:49:40.047682 update_engine[1427]: I1008 19:49:40.047178  1427 update_check_scheduler.cc:74] Next update check in 8m27s
Oct 8 19:49:40.062535 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 19:49:40.049327 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:49:40.057380 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:49:40.065100 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:49:40.068407 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 19:49:40.068407 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:49:40.068407 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 19:49:40.074468 extend-filesystems[1414]: Resized filesystem in /dev/vda9
Oct 8 19:49:40.068951 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:49:40.071053 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:49:40.113880 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:49:40.116049 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:49:40.117459 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 19:49:40.118208 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:49:40.232452 containerd[1437]: time="2024-10-08T19:49:40.231579503Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 8 19:49:40.256578 containerd[1437]: time="2024-10-08T19:49:40.256321362Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:49:40.256578 containerd[1437]: time="2024-10-08T19:49:40.256363108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.257665 containerd[1437]: time="2024-10-08T19:49:40.257628777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:49:40.257665 containerd[1437]: time="2024-10-08T19:49:40.257662017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.257867 containerd[1437]: time="2024-10-08T19:49:40.257842210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:49:40.257867 containerd[1437]: time="2024-10-08T19:49:40.257865416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:49:40.257997 containerd[1437]: time="2024-10-08T19:49:40.257931621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258034 containerd[1437]: time="2024-10-08T19:49:40.257994966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258034 containerd[1437]: time="2024-10-08T19:49:40.258007587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258069 containerd[1437]: time="2024-10-08T19:49:40.258060544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258246 containerd[1437]: time="2024-10-08T19:49:40.258227293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258273 containerd[1437]: time="2024-10-08T19:49:40.258249949Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 8 19:49:40.258273 containerd[1437]: time="2024-10-08T19:49:40.258260337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258363 containerd[1437]: time="2024-10-08T19:49:40.258345789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:49:40.258363 containerd[1437]: time="2024-10-08T19:49:40.258362056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:49:40.258421 containerd[1437]: time="2024-10-08T19:49:40.258408663Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 8 19:49:40.258440 containerd[1437]: time="2024-10-08T19:49:40.258420344Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:49:40.261643 containerd[1437]: time="2024-10-08T19:49:40.261615736Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:49:40.261643 containerd[1437]: time="2024-10-08T19:49:40.261646114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:49:40.261735 containerd[1437]: time="2024-10-08T19:49:40.261661010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:49:40.261735 containerd[1437]: time="2024-10-08T19:49:40.261690212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:49:40.261735 containerd[1437]: time="2024-10-08T19:49:40.261704167Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:49:40.261735 containerd[1437]: time="2024-10-08T19:49:40.261713692Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:49:40.261735 containerd[1437]: time="2024-10-08T19:49:40.261725451Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:49:40.261899 containerd[1437]: time="2024-10-08T19:49:40.261831247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:49:40.261899 containerd[1437]: time="2024-10-08T19:49:40.261852297Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:49:40.261899 containerd[1437]: time="2024-10-08T19:49:40.261865898Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:49:40.261899 containerd[1437]: time="2024-10-08T19:49:40.261885929Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:49:40.261899 containerd[1437]: time="2024-10-08T19:49:40.261899570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.261915210Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.261927557Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.261939512Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.261952173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.261973262Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.262038997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262085 containerd[1437]: time="2024-10-08T19:49:40.262050404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:49:40.262206 containerd[1437]: time="2024-10-08T19:49:40.262141383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:49:40.262413 containerd[1437]: time="2024-10-08T19:49:40.262392839Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:49:40.262456 containerd[1437]: time="2024-10-08T19:49:40.262420042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.262456 containerd[1437]: time="2024-10-08T19:49:40.262434742Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:49:40.262497 containerd[1437]: time="2024-10-08T19:49:40.262455085Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:49:40.263137 containerd[1437]: time="2024-10-08T19:49:40.263093153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263137 containerd[1437]: time="2024-10-08T19:49:40.263122904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263137 containerd[1437]: time="2024-10-08T19:49:40.263135565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263146619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263158809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263178409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263190286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263201300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263251 containerd[1437]: time="2024-10-08T19:49:40.263213765Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263341473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263364913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263378358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263395331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263407130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263419477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263430452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263463 containerd[1437]: time="2024-10-08T19:49:40.263440605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:49:40.263800 containerd[1437]: time="2024-10-08T19:49:40.263740588Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:49:40.263800 containerd[1437]: time="2024-10-08T19:49:40.263798523Z" level=info msg="Connect containerd service"
Oct 8 19:49:40.263939 containerd[1437]: time="2024-10-08T19:49:40.263824080Z" level=info msg="using legacy CRI server"
Oct 8 19:49:40.263939 containerd[1437]: time="2024-10-08T19:49:40.263831449Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:49:40.264015 containerd[1437]: time="2024-10-08T19:49:40.263966840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:49:40.264579 containerd[1437]: time="2024-10-08T19:49:40.264549637Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:49:40.264627 containerd[1437]: time="2024-10-08T19:49:40.264606161Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:49:40.264647 containerd[1437]: time="2024-10-08T19:49:40.264623056Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:49:40.264647 containerd[1437]: time="2024-10-08T19:49:40.264632816Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:49:40.264647 containerd[1437]: time="2024-10-08T19:49:40.264644379Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:49:40.265306 containerd[1437]: time="2024-10-08T19:49:40.264788119Z" level=info msg="Start subscribing containerd event"
Oct 8 19:49:40.265402 containerd[1437]: time="2024-10-08T19:49:40.265387968Z" level=info msg="Start recovering state"
Oct 8 19:49:40.265528 containerd[1437]: time="2024-10-08T19:49:40.265511442Z" level=info msg="Start event monitor"
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265655691Z" level=info msg="Start snapshots syncer"
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265673801Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265681013Z" level=info msg="Start streaming server"
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265154347Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265886764Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:49:40.266478 containerd[1437]: time="2024-10-08T19:49:40.265934821Z" level=info msg="containerd successfully booted in 0.035237s"
Oct 8 19:49:40.266041 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:49:40.371199 tar[1435]: linux-arm64/LICENSE
Oct 8 19:49:40.371394 tar[1435]: linux-arm64/README.md
Oct 8 19:49:40.383765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:49:40.788197 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:49:40.806440 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:49:40.820499 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:49:40.825289 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:49:40.825497 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:49:40.828159 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:49:40.839310 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:49:40.841951 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:49:40.844070 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 8 19:49:40.845393 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:49:41.219161 systemd-networkd[1372]: eth0: Gained IPv6LL
Oct 8 19:49:41.221512 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:49:41.223345 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:49:41.234214 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 8 19:49:41.236310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:49:41.238083 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:49:41.252100 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 8 19:49:41.252305 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 8 19:49:41.253611 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:49:41.260455 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:49:41.718671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:49:41.719860 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:49:41.722520 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:49:41.724374 systemd[1]: Startup finished in 543ms (kernel) + 4.706s (initrd) + 3.351s (userspace) = 8.601s.
Oct 8 19:49:42.170730 kubelet[1524]: E1008 19:49:42.170623    1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:49:42.172959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:49:42.173116 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:49:46.079743 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:49:46.080828 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Oct 8 19:49:46.134619 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.136257 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.144203 systemd-logind[1425]: New session 1 of user core.
Oct 8 19:49:46.145189 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:49:46.154196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:49:46.162713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:49:46.166271 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:49:46.171837 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.242003 systemd[1543]: Queued start job for default target default.target.
Oct 8 19:49:46.250829 systemd[1543]: Created slice app.slice - User Application Slice.
Oct 8 19:49:46.250859 systemd[1543]: Reached target paths.target - Paths.
Oct 8 19:49:46.250870 systemd[1543]: Reached target timers.target - Timers.
Oct 8 19:49:46.252014 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:49:46.261523 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:49:46.261590 systemd[1543]: Reached target sockets.target - Sockets.
Oct 8 19:49:46.261601 systemd[1543]: Reached target basic.target - Basic System.
Oct 8 19:49:46.261636 systemd[1543]: Reached target default.target - Main User Target.
Oct 8 19:49:46.261660 systemd[1543]: Startup finished in 85ms.
Oct 8 19:49:46.261863 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:49:46.263111 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:49:46.325419 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426).
Oct 8 19:49:46.370506 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.371690 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.375662 systemd-logind[1425]: New session 2 of user core.
Oct 8 19:49:46.388125 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:49:46.439626 sshd[1554]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:46.453241 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:45426.service: Deactivated successfully.
Oct 8 19:49:46.454487 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:49:46.456119 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:49:46.457003 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:45440.service - OpenSSH per-connection server daemon (10.0.0.1:45440).
Oct 8 19:49:46.457893 systemd-logind[1425]: Removed session 2.
Oct 8 19:49:46.491362 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 45440 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.492521 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.495846 systemd-logind[1425]: New session 3 of user core.
Oct 8 19:49:46.513099 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:49:46.559507 sshd[1561]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:46.568212 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:45440.service: Deactivated successfully.
Oct 8 19:49:46.569465 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:49:46.572120 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:49:46.573119 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:45442.service - OpenSSH per-connection server daemon (10.0.0.1:45442).
Oct 8 19:49:46.575335 systemd-logind[1425]: Removed session 3.
Oct 8 19:49:46.608764 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 45442 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.609911 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.613363 systemd-logind[1425]: New session 4 of user core.
Oct 8 19:49:46.623118 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:49:46.674180 sshd[1568]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:46.685162 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:45442.service: Deactivated successfully.
Oct 8 19:49:46.686474 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:49:46.687636 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:49:46.695186 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:45446.service - OpenSSH per-connection server daemon (10.0.0.1:45446).
Oct 8 19:49:46.695901 systemd-logind[1425]: Removed session 4.
Oct 8 19:49:46.726538 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 45446 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.727649 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.731400 systemd-logind[1425]: New session 5 of user core.
Oct 8 19:49:46.743106 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:49:46.807504 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:49:46.809587 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:49:46.825610 sudo[1578]: pam_unix(sudo:session): session closed for user root
Oct 8 19:49:46.827162 sshd[1575]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:46.837245 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:45446.service: Deactivated successfully.
Oct 8 19:49:46.838542 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:49:46.839810 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:49:46.840960 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:45456.service - OpenSSH per-connection server daemon (10.0.0.1:45456).
Oct 8 19:49:46.841699 systemd-logind[1425]: Removed session 5.
Oct 8 19:49:46.876373 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 45456 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:46.877695 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:46.881376 systemd-logind[1425]: New session 6 of user core.
Oct 8 19:49:46.894115 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:49:46.944788 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:49:46.945051 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:49:46.947910 sudo[1587]: pam_unix(sudo:session): session closed for user root
Oct 8 19:49:46.952141 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:49:46.952605 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:49:46.975266 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:49:46.976298 auditctl[1590]: No rules
Oct 8 19:49:46.977111 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:49:46.978058 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:49:46.979593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:49:47.000875 augenrules[1608]: No rules
Oct 8 19:49:47.002010 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:49:47.003168 sudo[1586]: pam_unix(sudo:session): session closed for user root
Oct 8 19:49:47.004611 sshd[1583]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:47.011132 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:45456.service: Deactivated successfully.
Oct 8 19:49:47.012470 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:49:47.014136 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:49:47.014676 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:45466.service - OpenSSH per-connection server daemon (10.0.0.1:45466).
Oct 8 19:49:47.015354 systemd-logind[1425]: Removed session 6.
Oct 8 19:49:47.049483 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 45466 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:49:47.050622 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:47.053875 systemd-logind[1425]: New session 7 of user core.
Oct 8 19:49:47.064105 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:49:47.112879 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:49:47.113164 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:49:47.221301 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:49:47.221836 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:49:47.447846 dockerd[1629]: time="2024-10-08T19:49:47.447457068Z" level=info msg="Starting up"
Oct 8 19:49:47.537047 dockerd[1629]: time="2024-10-08T19:49:47.536931153Z" level=info msg="Loading containers: start."
Oct 8 19:49:47.625995 kernel: Initializing XFRM netlink socket
Oct 8 19:49:47.681827 systemd-networkd[1372]: docker0: Link UP
Oct 8 19:49:47.695119 dockerd[1629]: time="2024-10-08T19:49:47.695083653Z" level=info msg="Loading containers: done."
Oct 8 19:49:47.753480 dockerd[1629]: time="2024-10-08T19:49:47.753428303Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:49:47.753626 dockerd[1629]: time="2024-10-08T19:49:47.753602481Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 19:49:47.753741 dockerd[1629]: time="2024-10-08T19:49:47.753710543Z" level=info msg="Daemon has completed initialization"
Oct 8 19:49:47.777111 dockerd[1629]: time="2024-10-08T19:49:47.777057134Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:49:47.777715 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:49:48.328222 containerd[1437]: time="2024-10-08T19:49:48.328172468Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 8 19:49:49.016863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75913996.mount: Deactivated successfully.
Oct 8 19:49:50.139429 containerd[1437]: time="2024-10-08T19:49:50.139265697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:50.140268 containerd[1437]: time="2024-10-08T19:49:50.140004023Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=29945964"
Oct 8 19:49:50.140850 containerd[1437]: time="2024-10-08T19:49:50.140818942Z" level=info msg="ImageCreate event name:\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:50.143735 containerd[1437]: time="2024-10-08T19:49:50.143702057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:50.144928 containerd[1437]: time="2024-10-08T19:49:50.144882677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"29942762\" in 1.816666577s"
Oct 8 19:49:50.144928 containerd[1437]: time="2024-10-08T19:49:50.144921591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\""
Oct 8 19:49:50.162712 containerd[1437]: time="2024-10-08T19:49:50.162686605Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 8 19:49:52.071400 containerd[1437]: time="2024-10-08T19:49:52.071321859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:52.072008 containerd[1437]: time="2024-10-08T19:49:52.071961586Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=26885775"
Oct 8 19:49:52.078767 containerd[1437]: time="2024-10-08T19:49:52.078719668Z" level=info msg="ImageCreate event name:\"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:52.081698 containerd[1437]: time="2024-10-08T19:49:52.081646243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:52.082684 containerd[1437]: time="2024-10-08T19:49:52.082599658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"28373587\" in 1.919877065s"
Oct 8 19:49:52.082684 containerd[1437]: time="2024-10-08T19:49:52.082635313Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\""
Oct 8 19:49:52.103284 containerd[1437]: time="2024-10-08T19:49:52.103247867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 8 19:49:52.423405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:49:52.434169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:49:52.523455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:49:52.527421 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:49:52.567180 kubelet[1849]: E1008 19:49:52.567122 1849 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:49:52.570436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:49:52.570580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:49:53.140914 containerd[1437]: time="2024-10-08T19:49:53.140861302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:53.141433 containerd[1437]: time="2024-10-08T19:49:53.141400630Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=16154274"
Oct 8 19:49:53.142337 containerd[1437]: time="2024-10-08T19:49:53.142287924Z" level=info msg="ImageCreate event name:\"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:53.145172 containerd[1437]: time="2024-10-08T19:49:53.145144835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:53.146322 containerd[1437]: time="2024-10-08T19:49:53.146276383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"17642104\" in 1.04298923s"
Oct 8 19:49:53.146322 containerd[1437]: time="2024-10-08T19:49:53.146309266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\""
Oct 8 19:49:53.164240 containerd[1437]: time="2024-10-08T19:49:53.164134705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 8 19:49:54.177489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557919096.mount: Deactivated successfully.
Oct 8 19:49:55.468995 containerd[1437]: time="2024-10-08T19:49:55.468737622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:55.469853 containerd[1437]: time="2024-10-08T19:49:55.469662832Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=25648343"
Oct 8 19:49:55.470711 containerd[1437]: time="2024-10-08T19:49:55.470673451Z" level=info msg="ImageCreate event name:\"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:55.472597 containerd[1437]: time="2024-10-08T19:49:55.472569269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:55.473402 containerd[1437]: time="2024-10-08T19:49:55.473325817Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"25647360\" in 2.309152043s"
Oct 8 19:49:55.473402 containerd[1437]: time="2024-10-08T19:49:55.473359486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\""
Oct 8 19:49:55.491582 containerd[1437]: time="2024-10-08T19:49:55.491532156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:49:56.105279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641702030.mount: Deactivated successfully.
Oct 8 19:49:57.699776 containerd[1437]: time="2024-10-08T19:49:57.699505081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:57.700777 containerd[1437]: time="2024-10-08T19:49:57.700555382Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 8 19:49:57.701698 containerd[1437]: time="2024-10-08T19:49:57.701656457Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:57.704728 containerd[1437]: time="2024-10-08T19:49:57.704663855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:57.705921 containerd[1437]: time="2024-10-08T19:49:57.705882965Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.21430744s"
Oct 8 19:49:57.705959 containerd[1437]: time="2024-10-08T19:49:57.705927473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:49:57.725254 containerd[1437]: time="2024-10-08T19:49:57.725151338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:49:58.144439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205091923.mount: Deactivated successfully.
Oct 8 19:49:58.148570 containerd[1437]: time="2024-10-08T19:49:58.148519973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:58.149624 containerd[1437]: time="2024-10-08T19:49:58.149577134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 8 19:49:58.150484 containerd[1437]: time="2024-10-08T19:49:58.150435696Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:58.153247 containerd[1437]: time="2024-10-08T19:49:58.153009583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:58.154740 containerd[1437]: time="2024-10-08T19:49:58.154705823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 429.515963ms"
Oct 8 19:49:58.154944 containerd[1437]: time="2024-10-08T19:49:58.154859664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:49:58.173791 containerd[1437]: time="2024-10-08T19:49:58.173752765Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Oct 8 19:49:59.662263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992971921.mount: Deactivated successfully.
Oct 8 19:50:01.752564 containerd[1437]: time="2024-10-08T19:50:01.752486168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:50:01.753003 containerd[1437]: time="2024-10-08T19:50:01.752963388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Oct 8 19:50:01.753926 containerd[1437]: time="2024-10-08T19:50:01.753894376Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:50:01.758377 containerd[1437]: time="2024-10-08T19:50:01.758323672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:50:01.759783 containerd[1437]: time="2024-10-08T19:50:01.759744466Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.58595017s"
Oct 8 19:50:01.759829 containerd[1437]: time="2024-10-08T19:50:01.759784856Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Oct 8 19:50:02.786467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:50:02.793202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:02.883672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:02.888061 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:50:02.927660 kubelet[2068]: E1008 19:50:02.927612 2068 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:50:02.930042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:50:02.930200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:50:07.316543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:07.326195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:07.345386 systemd[1]: Reloading requested from client PID 2084 ('systemctl') (unit session-7.scope)...
Oct 8 19:50:07.345403 systemd[1]: Reloading...
Oct 8 19:50:07.414080 zram_generator::config[2121]: No configuration found.
Oct 8 19:50:07.529085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:50:07.582040 systemd[1]: Reloading finished in 236 ms.
Oct 8 19:50:07.627709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:07.631057 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:50:07.631268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:07.633121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:07.738140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:07.742784 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:50:07.784054 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:50:07.784054 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:50:07.784054 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:50:07.785651 kubelet[2168]: I1008 19:50:07.785588 2168 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:50:08.212229 kubelet[2168]: I1008 19:50:08.212181 2168 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 8 19:50:08.212229 kubelet[2168]: I1008 19:50:08.212221 2168 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:50:08.212456 kubelet[2168]: I1008 19:50:08.212426 2168 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 8 19:50:08.238623 kubelet[2168]: E1008 19:50:08.238595 2168 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.239024 kubelet[2168]: I1008 19:50:08.238958 2168 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:50:08.249495 kubelet[2168]: I1008 19:50:08.249465 2168 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:50:08.250516 kubelet[2168]: I1008 19:50:08.250473 2168 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:50:08.250705 kubelet[2168]: I1008 19:50:08.250518 2168 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:50:08.250793 kubelet[2168]: I1008 19:50:08.250766 2168 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:50:08.250793 kubelet[2168]: I1008 19:50:08.250776 2168 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:50:08.251076 kubelet[2168]: I1008 19:50:08.251054 2168 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:50:08.252131 kubelet[2168]: I1008 19:50:08.252087 2168 kubelet.go:400] "Attempting to sync node with API server"
Oct 8 19:50:08.252131 kubelet[2168]: I1008 19:50:08.252115 2168 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:50:08.252636 kubelet[2168]: I1008 19:50:08.252423 2168 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:50:08.252636 kubelet[2168]: I1008 19:50:08.252538 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:50:08.253043 kubelet[2168]: W1008 19:50:08.252969 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.253123 kubelet[2168]: E1008 19:50:08.253054 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.253202 kubelet[2168]: W1008 19:50:08.253024 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.253283 kubelet[2168]: E1008 19:50:08.253268 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.253678 kubelet[2168]: I1008 19:50:08.253643 2168 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:50:08.254042 kubelet[2168]: I1008 19:50:08.254024 2168 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:50:08.254140 kubelet[2168]: W1008 19:50:08.254130 2168 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:50:08.254937 kubelet[2168]: I1008 19:50:08.254917 2168 server.go:1264] "Started kubelet"
Oct 8 19:50:08.255172 kubelet[2168]: I1008 19:50:08.255132 2168 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:50:08.255812 kubelet[2168]: I1008 19:50:08.255303 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:50:08.255812 kubelet[2168]: I1008 19:50:08.255563 2168 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:50:08.257909 kubelet[2168]: I1008 19:50:08.256361 2168 server.go:455] "Adding debug handlers to kubelet server"
Oct 8 19:50:08.257909 kubelet[2168]: E1008 19:50:08.256758 2168 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc921c88c6c35c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:50:08.25489494 +0000 UTC m=+0.508492797,LastTimestamp:2024-10-08 19:50:08.25489494 +0000 UTC m=+0.508492797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:50:08.258087 kubelet[2168]: I1008 19:50:08.258056 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:50:08.259134 kubelet[2168]: E1008 19:50:08.259109 2168 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:50:08.259228 kubelet[2168]: I1008 19:50:08.259218 2168 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:50:08.259320 kubelet[2168]: I1008 19:50:08.259303 2168 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 8 19:50:08.261066 kubelet[2168]: I1008 19:50:08.261018 2168 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:50:08.262071 kubelet[2168]: W1008 19:50:08.261887 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.262071 kubelet[2168]: E1008 19:50:08.261996 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.262180 kubelet[2168]: I1008 19:50:08.262091 2168 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:50:08.262180 kubelet[2168]: E1008 19:50:08.262160 2168 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:50:08.262251 kubelet[2168]: I1008 19:50:08.262203 2168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:50:08.262842 kubelet[2168]: E1008 19:50:08.262805 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms"
Oct 8 19:50:08.263896 kubelet[2168]: I1008 19:50:08.263876 2168 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:50:08.276629 kubelet[2168]: I1008 19:50:08.276468 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:50:08.277834 kubelet[2168]: I1008 19:50:08.277802 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:50:08.277996 kubelet[2168]: I1008 19:50:08.277970 2168 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:50:08.278027 kubelet[2168]: I1008 19:50:08.278007 2168 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 8 19:50:08.278081 kubelet[2168]: E1008 19:50:08.278055 2168 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:50:08.278791 kubelet[2168]: W1008 19:50:08.278651 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.278791 kubelet[2168]: E1008 19:50:08.278706 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:08.280647 kubelet[2168]: I1008 19:50:08.280620 2168 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:50:08.280647 kubelet[2168]: I1008 19:50:08.280640 2168 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:50:08.280761 kubelet[2168]: I1008 19:50:08.280659 2168 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:50:08.343231 kubelet[2168]: I1008 19:50:08.343179 2168 policy_none.go:49] "None policy: Start"
Oct 8 19:50:08.343998 kubelet[2168]: I1008 19:50:08.343953 2168 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:50:08.344074 kubelet[2168]: I1008 19:50:08.344034 2168 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:50:08.350369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 19:50:08.360767 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 19:50:08.361476 kubelet[2168]: I1008 19:50:08.361429 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:50:08.361850 kubelet[2168]: E1008 19:50:08.361807 2168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost"
Oct 8 19:50:08.363949 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 19:50:08.373878 kubelet[2168]: I1008 19:50:08.373847 2168 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:50:08.374114 kubelet[2168]: I1008 19:50:08.374075 2168 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:50:08.374425 kubelet[2168]: I1008 19:50:08.374189 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:50:08.375662 kubelet[2168]: E1008 19:50:08.375636 2168 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 8 19:50:08.379032 kubelet[2168]: I1008 19:50:08.378974 2168 topology_manager.go:215] "Topology Admit Handler" podUID="d355b4785f09447f165c1c39bcc3708d" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 8 19:50:08.380302 kubelet[2168]: I1008 19:50:08.380238 2168 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 8 19:50:08.381322 kubelet[2168]: I1008 19:50:08.381278 2168 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 8 19:50:08.387323 systemd[1]: Created slice kubepods-burstable-podd355b4785f09447f165c1c39bcc3708d.slice - libcontainer container kubepods-burstable-podd355b4785f09447f165c1c39bcc3708d.slice.
Oct 8 19:50:08.405524 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice.
Oct 8 19:50:08.419514 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice.
Oct 8 19:50:08.462974 kubelet[2168]: I1008 19:50:08.462284 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:50:08.462974 kubelet[2168]: I1008 19:50:08.462332 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:50:08.462974 kubelet[2168]: I1008 19:50:08.462356 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:50:08.462974 kubelet[2168]: I1008 19:50:08.462372 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:50:08.462974 kubelet[2168]: I1008 19:50:08.462388 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d355b4785f09447f165c1c39bcc3708d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:50:08.463190 kubelet[2168]: I1008 19:50:08.462403 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d355b4785f09447f165c1c39bcc3708d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:50:08.463190 kubelet[2168]: I1008 19:50:08.462422 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:50:08.463190 kubelet[2168]: I1008 19:50:08.462437 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d355b4785f09447f165c1c39bcc3708d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:50:08.463190 kubelet[2168]: I1008 19:50:08.462451 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:50:08.463693 kubelet[2168]: E1008 19:50:08.463544 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms"
Oct 8 19:50:08.563556 kubelet[2168]: I1008 19:50:08.563528 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:50:08.564034 kubelet[2168]: E1008 19:50:08.564000 2168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost"
Oct 8 19:50:08.704609 kubelet[2168]: E1008 19:50:08.704535 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:08.705265 containerd[1437]: time="2024-10-08T19:50:08.705225072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d355b4785f09447f165c1c39bcc3708d,Namespace:kube-system,Attempt:0,}"
Oct 8 19:50:08.718796 kubelet[2168]: E1008 19:50:08.718446 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:08.719414 containerd[1437]: time="2024-10-08T19:50:08.719122364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}"
Oct 8 19:50:08.722197 kubelet[2168]: E1008 19:50:08.721861 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:08.722329 containerd[1437]: time="2024-10-08T19:50:08.722294171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}"
Oct 8 19:50:08.864917 kubelet[2168]: E1008 19:50:08.864858 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms"
Oct 8 19:50:08.965412 kubelet[2168]: I1008 19:50:08.965372 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:50:08.965732 kubelet[2168]: E1008 19:50:08.965705 2168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost"
Oct 8 19:50:09.108899 kubelet[2168]: W1008 19:50:09.108724 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.108899 kubelet[2168]: E1008 19:50:09.108786 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.232577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914854866.mount: Deactivated successfully.
Oct 8 19:50:09.238257 containerd[1437]: time="2024-10-08T19:50:09.237966861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:50:09.239754 containerd[1437]: time="2024-10-08T19:50:09.239639723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:50:09.240477 containerd[1437]: time="2024-10-08T19:50:09.240424955Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:50:09.241268 containerd[1437]: time="2024-10-08T19:50:09.241221982Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:50:09.241896 containerd[1437]: time="2024-10-08T19:50:09.241709619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Oct 8 19:50:09.242702 containerd[1437]: time="2024-10-08T19:50:09.242666939Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:50:09.243379 containerd[1437]: time="2024-10-08T19:50:09.243332022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:50:09.246987 containerd[1437]: time="2024-10-08T19:50:09.246911368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:50:09.247821 containerd[1437]: time="2024-10-08T19:50:09.247795159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.409431ms"
Oct 8 19:50:09.249494 containerd[1437]: time="2024-10-08T19:50:09.249316484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.988781ms"
Oct 8 19:50:09.251158 containerd[1437]: time="2024-10-08T19:50:09.250959679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.743359ms"
Oct 8 19:50:09.345390 kubelet[2168]: W1008 19:50:09.341224 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.345390 kubelet[2168]: E1008 19:50:09.341302 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.356724 kubelet[2168]: W1008 19:50:09.356343 2168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.356724 kubelet[2168]: E1008 19:50:09.356388 2168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.410746038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.411043554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.411058868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.411068464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.410819008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.410907171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.410928602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:50:09.411359 containerd[1437]: time="2024-10-08T19:50:09.410943356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.411622 containerd[1437]: time="2024-10-08T19:50:09.411180977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:50:09.411622 containerd[1437]: time="2024-10-08T19:50:09.411360302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.411622 containerd[1437]: time="2024-10-08T19:50:09.411377815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:50:09.411622 containerd[1437]: time="2024-10-08T19:50:09.411390489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:50:09.431189 systemd[1]: Started cri-containerd-99da08ebee1ef4951351f0a75456a75cbc41952612c3e0a9ea13d996ae4f1aa5.scope - libcontainer container 99da08ebee1ef4951351f0a75456a75cbc41952612c3e0a9ea13d996ae4f1aa5.
Oct 8 19:50:09.435502 systemd[1]: Started cri-containerd-6df8a6ab1cd870d305817c3354bcded396455d4205a292a44e01a3661899cad2.scope - libcontainer container 6df8a6ab1cd870d305817c3354bcded396455d4205a292a44e01a3661899cad2.
Oct 8 19:50:09.436748 systemd[1]: Started cri-containerd-d6245c7f3c9652e01d88ea14e80268bf2cf9b2335abfae7e8c2f08f16b02975e.scope - libcontainer container d6245c7f3c9652e01d88ea14e80268bf2cf9b2335abfae7e8c2f08f16b02975e.
Oct 8 19:50:09.468545 containerd[1437]: time="2024-10-08T19:50:09.468502336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"99da08ebee1ef4951351f0a75456a75cbc41952612c3e0a9ea13d996ae4f1aa5\""
Oct 8 19:50:09.471507 kubelet[2168]: E1008 19:50:09.471389 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:09.474345 containerd[1437]: time="2024-10-08T19:50:09.474275247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d355b4785f09447f165c1c39bcc3708d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6df8a6ab1cd870d305817c3354bcded396455d4205a292a44e01a3661899cad2\""
Oct 8 19:50:09.475370 kubelet[2168]: E1008 19:50:09.475347 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:09.475582 containerd[1437]: time="2024-10-08T19:50:09.475550235Z" level=info msg="CreateContainer within sandbox \"99da08ebee1ef4951351f0a75456a75cbc41952612c3e0a9ea13d996ae4f1aa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:50:09.477236 containerd[1437]: time="2024-10-08T19:50:09.477088513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6245c7f3c9652e01d88ea14e80268bf2cf9b2335abfae7e8c2f08f16b02975e\""
Oct 8 19:50:09.477561 containerd[1437]: time="2024-10-08T19:50:09.477461357Z" level=info msg="CreateContainer within sandbox \"6df8a6ab1cd870d305817c3354bcded396455d4205a292a44e01a3661899cad2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:50:09.478101 kubelet[2168]: E1008 19:50:09.478041 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:09.479905 containerd[1437]: time="2024-10-08T19:50:09.479813696Z" level=info msg="CreateContainer within sandbox \"d6245c7f3c9652e01d88ea14e80268bf2cf9b2335abfae7e8c2f08f16b02975e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:50:09.491812 containerd[1437]: time="2024-10-08T19:50:09.491762709Z" level=info msg="CreateContainer within sandbox \"6df8a6ab1cd870d305817c3354bcded396455d4205a292a44e01a3661899cad2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"873d94eb4bc5cdad87bbff4623ffdbc896b16e7fb75bcf7cf03ba5620228c7c8\""
Oct 8 19:50:09.492368 containerd[1437]: time="2024-10-08T19:50:09.492342587Z" level=info msg="CreateContainer within sandbox \"99da08ebee1ef4951351f0a75456a75cbc41952612c3e0a9ea13d996ae4f1aa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4612c37884ce5ed12b165386ddcd3c5eb76034f8df470d55eeb933d059928430\""
Oct 8 19:50:09.492487 containerd[1437]: time="2024-10-08T19:50:09.492460898Z" level=info msg="StartContainer for \"873d94eb4bc5cdad87bbff4623ffdbc896b16e7fb75bcf7cf03ba5620228c7c8\""
Oct 8 19:50:09.492684 containerd[1437]: time="2024-10-08T19:50:09.492660734Z" level=info msg="StartContainer for \"4612c37884ce5ed12b165386ddcd3c5eb76034f8df470d55eeb933d059928430\""
Oct 8 19:50:09.500601 containerd[1437]: time="2024-10-08T19:50:09.500565356Z" level=info msg="CreateContainer within sandbox \"d6245c7f3c9652e01d88ea14e80268bf2cf9b2335abfae7e8c2f08f16b02975e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c50f3892c9c8338c5ee52bc10a395e01cdc676c3fbb3fbc68bf2c4d66e1c1cd9\""
Oct 8 19:50:09.501634 containerd[1437]: time="2024-10-08T19:50:09.501190895Z" level=info msg="StartContainer for \"c50f3892c9c8338c5ee52bc10a395e01cdc676c3fbb3fbc68bf2c4d66e1c1cd9\""
Oct 8 19:50:09.519153 systemd[1]: Started cri-containerd-873d94eb4bc5cdad87bbff4623ffdbc896b16e7fb75bcf7cf03ba5620228c7c8.scope - libcontainer container 873d94eb4bc5cdad87bbff4623ffdbc896b16e7fb75bcf7cf03ba5620228c7c8.
Oct 8 19:50:09.522537 systemd[1]: Started cri-containerd-4612c37884ce5ed12b165386ddcd3c5eb76034f8df470d55eeb933d059928430.scope - libcontainer container 4612c37884ce5ed12b165386ddcd3c5eb76034f8df470d55eeb933d059928430.
Oct 8 19:50:09.524442 systemd[1]: Started cri-containerd-c50f3892c9c8338c5ee52bc10a395e01cdc676c3fbb3fbc68bf2c4d66e1c1cd9.scope - libcontainer container c50f3892c9c8338c5ee52bc10a395e01cdc676c3fbb3fbc68bf2c4d66e1c1cd9.
Oct 8 19:50:09.554051 containerd[1437]: time="2024-10-08T19:50:09.553944880Z" level=info msg="StartContainer for \"873d94eb4bc5cdad87bbff4623ffdbc896b16e7fb75bcf7cf03ba5620228c7c8\" returns successfully"
Oct 8 19:50:09.575830 containerd[1437]: time="2024-10-08T19:50:09.575783886Z" level=info msg="StartContainer for \"c50f3892c9c8338c5ee52bc10a395e01cdc676c3fbb3fbc68bf2c4d66e1c1cd9\" returns successfully"
Oct 8 19:50:09.576028 containerd[1437]: time="2024-10-08T19:50:09.575956094Z" level=info msg="StartContainer for \"4612c37884ce5ed12b165386ddcd3c5eb76034f8df470d55eeb933d059928430\" returns successfully"
Oct 8 19:50:09.667117 kubelet[2168]: E1008 19:50:09.666447 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s"
Oct 8 19:50:09.767764 kubelet[2168]: I1008 19:50:09.767640 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:50:10.285741 kubelet[2168]: E1008 19:50:10.285708 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:10.288039 kubelet[2168]: E1008 19:50:10.287771 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:10.289838 kubelet[2168]: E1008 19:50:10.289787 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:11.291790 kubelet[2168]: E1008 19:50:11.291725 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:11.297989 kubelet[2168]: E1008 19:50:11.294569 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:11.307720 kubelet[2168]: E1008 19:50:11.307696 2168 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 8 19:50:11.393831 kubelet[2168]: I1008 19:50:11.393777 2168 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 19:50:12.254554 kubelet[2168]: I1008 19:50:12.254506 2168 apiserver.go:52] "Watching apiserver"
Oct 8 19:50:12.260189 kubelet[2168]: I1008 19:50:12.260138 2168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 8 19:50:13.202056 systemd[1]: Reloading requested from client PID 2450 ('systemctl') (unit session-7.scope)...
Oct 8 19:50:13.202074 systemd[1]: Reloading...
Oct 8 19:50:13.269013 zram_generator::config[2487]: No configuration found.
Oct 8 19:50:13.430654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:50:13.496164 systemd[1]: Reloading finished in 293 ms.
Oct 8 19:50:13.534298 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:13.540746 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:50:13.540962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:13.553196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:50:13.650223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:50:13.654094 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:50:13.695794 kubelet[2529]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:50:13.695794 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:50:13.695794 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:50:13.696154 kubelet[2529]: I1008 19:50:13.695828 2529 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:50:13.699919 kubelet[2529]: I1008 19:50:13.699867 2529 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 8 19:50:13.699919 kubelet[2529]: I1008 19:50:13.699893 2529 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:50:13.700185 kubelet[2529]: I1008 19:50:13.700160 2529 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 8 19:50:13.701543 kubelet[2529]: I1008 19:50:13.701521 2529 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:50:13.702902 kubelet[2529]: I1008 19:50:13.702879 2529 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:50:13.708325 kubelet[2529]: I1008 19:50:13.708297 2529 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:50:13.708510 kubelet[2529]: I1008 19:50:13.708485 2529 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:50:13.708675 kubelet[2529]: I1008 19:50:13.708511 2529 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:50:13.708675 kubelet[2529]: I1008 19:50:13.708671 2529 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:50:13.708675 kubelet[2529]: I1008 19:50:13.708679 2529 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:50:13.708805 kubelet[2529]: I1008 19:50:13.708710 2529 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:50:13.708805 kubelet[2529]: I1008 19:50:13.708798 2529 kubelet.go:400] "Attempting to sync node with API server"
Oct 8 19:50:13.708850 kubelet[2529]: I1008 19:50:13.708808 2529 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:50:13.708850 kubelet[2529]: I1008 19:50:13.708834 2529 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:50:13.708850 kubelet[2529]: I1008 19:50:13.708849 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:50:13.710516 kubelet[2529]: I1008 19:50:13.710102 2529 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:50:13.710516 kubelet[2529]: I1008 19:50:13.710288 2529 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:50:13.710876 kubelet[2529]: I1008 19:50:13.710858 2529 server.go:1264] "Started kubelet"
Oct 8 19:50:13.712566 kubelet[2529]: I1008 19:50:13.712508 2529 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:50:13.712772 kubelet[2529]: I1008 19:50:13.712748 2529 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:50:13.712819 kubelet[2529]: I1008 19:50:13.712788 2529 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:50:13.713017 kubelet[2529]: I1008 19:50:13.713004 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:50:13.715379 kubelet[2529]: I1008 19:50:13.715351 2529 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 8 19:50:13.715829 kubelet[2529]: I1008 19:50:13.715804 2529 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:50:13.716072 kubelet[2529]: I1008 19:50:13.716052 2529 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:50:13.717034 kubelet[2529]: I1008 19:50:13.717012 2529 server.go:455] "Adding debug handlers to kubelet server"
Oct 8 19:50:13.721162 kubelet[2529]: I1008 19:50:13.719596 2529 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:50:13.721267 kubelet[2529]: I1008 19:50:13.721240 2529 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:50:13.722186 kubelet[2529]: I1008 19:50:13.722152 2529 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:50:13.728149 kubelet[2529]: I1008 19:50:13.728123 2529 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:50:13.728349 kubelet[2529]: I1008 19:50:13.728159 2529 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:50:13.728349 kubelet[2529]: I1008 19:50:13.728179 2529 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 8 19:50:13.728349 kubelet[2529]: E1008 19:50:13.728219 2529 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:50:13.737180 kubelet[2529]: E1008 19:50:13.737152 2529 kubelet.go:1467] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:50:13.738443 kubelet[2529]: I1008 19:50:13.738405 2529 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:50:13.769297 kubelet[2529]: I1008 19:50:13.769195 2529 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:50:13.769297 kubelet[2529]: I1008 19:50:13.769217 2529 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:50:13.769297 kubelet[2529]: I1008 19:50:13.769237 2529 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:50:13.769448 kubelet[2529]: I1008 19:50:13.769382 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:50:13.769448 kubelet[2529]: I1008 19:50:13.769393 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:50:13.769448 kubelet[2529]: I1008 19:50:13.769410 2529 policy_none.go:49] "None policy: Start" Oct 8 19:50:13.770780 kubelet[2529]: I1008 19:50:13.770488 2529 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:50:13.770780 kubelet[2529]: I1008 19:50:13.770515 2529 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:50:13.770780 kubelet[2529]: I1008 19:50:13.770637 2529 state_mem.go:75] "Updated machine memory state" Oct 8 19:50:13.774652 kubelet[2529]: I1008 19:50:13.774624 2529 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:50:13.775058 kubelet[2529]: I1008 19:50:13.775011 2529 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:50:13.775162 kubelet[2529]: I1008 19:50:13.775148 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:50:13.818364 kubelet[2529]: I1008 19:50:13.818337 2529 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:50:13.824549 kubelet[2529]: I1008 19:50:13.824506 2529 
kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:50:13.824661 kubelet[2529]: I1008 19:50:13.824621 2529 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:50:13.828948 kubelet[2529]: I1008 19:50:13.828886 2529 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:50:13.829043 kubelet[2529]: I1008 19:50:13.829028 2529 topology_manager.go:215] "Topology Admit Handler" podUID="d355b4785f09447f165c1c39bcc3708d" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:50:13.829443 kubelet[2529]: I1008 19:50:13.829184 2529 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:50:13.917213 kubelet[2529]: I1008 19:50:13.917161 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:50:13.917213 kubelet[2529]: I1008 19:50:13.917206 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:50:13.917371 kubelet[2529]: I1008 19:50:13.917224 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d355b4785f09447f165c1c39bcc3708d\") " 
pod="kube-system/kube-apiserver-localhost" Oct 8 19:50:13.917371 kubelet[2529]: I1008 19:50:13.917243 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d355b4785f09447f165c1c39bcc3708d\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:50:13.917371 kubelet[2529]: I1008 19:50:13.917258 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:50:13.917371 kubelet[2529]: I1008 19:50:13.917273 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:50:13.917371 kubelet[2529]: I1008 19:50:13.917289 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:50:13.917489 kubelet[2529]: I1008 19:50:13.917306 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d355b4785f09447f165c1c39bcc3708d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"d355b4785f09447f165c1c39bcc3708d\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:50:13.917489 kubelet[2529]: I1008 19:50:13.917349 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:50:14.159765 kubelet[2529]: E1008 19:50:14.159485 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:14.159765 kubelet[2529]: E1008 19:50:14.159598 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:14.160489 kubelet[2529]: E1008 19:50:14.160466 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:14.201532 sudo[2566]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 19:50:14.201773 sudo[2566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 8 19:50:14.634553 sudo[2566]: pam_unix(sudo:session): session closed for user root Oct 8 19:50:14.709656 kubelet[2529]: I1008 19:50:14.709504 2529 apiserver.go:52] "Watching apiserver" Oct 8 19:50:14.716049 kubelet[2529]: I1008 19:50:14.716024 2529 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:50:14.753917 kubelet[2529]: E1008 19:50:14.753882 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 8 19:50:14.761634 kubelet[2529]: E1008 19:50:14.761359 2529 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 8 19:50:14.761925 kubelet[2529]: E1008 19:50:14.761847 2529 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:50:14.762097 kubelet[2529]: E1008 19:50:14.762033 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:14.766811 kubelet[2529]: E1008 19:50:14.762255 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:14.781221 kubelet[2529]: I1008 19:50:14.780802 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.780771321 podStartE2EDuration="1.780771321s" podCreationTimestamp="2024-10-08 19:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:50:14.780637363 +0000 UTC m=+1.123556290" watchObservedRunningTime="2024-10-08 19:50:14.780771321 +0000 UTC m=+1.123690248" Oct 8 19:50:14.796933 kubelet[2529]: I1008 19:50:14.796876 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.796859509 podStartE2EDuration="1.796859509s" podCreationTimestamp="2024-10-08 19:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:50:14.795730447 +0000 UTC m=+1.138649374" watchObservedRunningTime="2024-10-08 
19:50:14.796859509 +0000 UTC m=+1.139778436" Oct 8 19:50:14.797184 kubelet[2529]: I1008 19:50:14.796959 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7969555879999999 podStartE2EDuration="1.796955588s" podCreationTimestamp="2024-10-08 19:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:50:14.789074391 +0000 UTC m=+1.131993318" watchObservedRunningTime="2024-10-08 19:50:14.796955588 +0000 UTC m=+1.139874515" Oct 8 19:50:15.755035 kubelet[2529]: E1008 19:50:15.754825 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:15.755380 kubelet[2529]: E1008 19:50:15.755158 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:16.364741 sudo[1619]: pam_unix(sudo:session): session closed for user root Oct 8 19:50:16.366149 sshd[1616]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:16.369877 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:45466.service: Deactivated successfully. Oct 8 19:50:16.371889 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:50:16.372189 systemd[1]: session-7.scope: Consumed 7.994s CPU time, 136.2M memory peak, 0B memory swap peak. Oct 8 19:50:16.372870 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:50:16.373938 systemd-logind[1425]: Removed session 7. 
Oct 8 19:50:16.756352 kubelet[2529]: E1008 19:50:16.756247 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:17.197150 kubelet[2529]: E1008 19:50:17.197013 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:23.296890 kubelet[2529]: E1008 19:50:23.296855 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:23.768831 kubelet[2529]: E1008 19:50:23.768719 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:25.132168 update_engine[1427]: I1008 19:50:25.132107 1427 update_attempter.cc:509] Updating boot flags... 
Oct 8 19:50:25.166067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2615) Oct 8 19:50:25.206050 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2614) Oct 8 19:50:25.221015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2614) Oct 8 19:50:25.865485 kubelet[2529]: E1008 19:50:25.865426 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:27.205462 kubelet[2529]: E1008 19:50:27.204401 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:27.852401 kubelet[2529]: I1008 19:50:27.852360 2529 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:50:27.852725 containerd[1437]: time="2024-10-08T19:50:27.852678556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 19:50:27.853019 kubelet[2529]: I1008 19:50:27.852850 2529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:50:28.611309 kubelet[2529]: I1008 19:50:28.609947 2529 topology_manager.go:215] "Topology Admit Handler" podUID="7759cf3a-9d6b-41d5-8ba6-1f55b1343747" podNamespace="kube-system" podName="kube-proxy-s6ztf" Oct 8 19:50:28.615865 kubelet[2529]: I1008 19:50:28.615839 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7759cf3a-9d6b-41d5-8ba6-1f55b1343747-kube-proxy\") pod \"kube-proxy-s6ztf\" (UID: \"7759cf3a-9d6b-41d5-8ba6-1f55b1343747\") " pod="kube-system/kube-proxy-s6ztf" Oct 8 19:50:28.616045 kubelet[2529]: I1008 19:50:28.616032 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7759cf3a-9d6b-41d5-8ba6-1f55b1343747-xtables-lock\") pod \"kube-proxy-s6ztf\" (UID: \"7759cf3a-9d6b-41d5-8ba6-1f55b1343747\") " pod="kube-system/kube-proxy-s6ztf" Oct 8 19:50:28.616159 kubelet[2529]: I1008 19:50:28.616146 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7759cf3a-9d6b-41d5-8ba6-1f55b1343747-lib-modules\") pod \"kube-proxy-s6ztf\" (UID: \"7759cf3a-9d6b-41d5-8ba6-1f55b1343747\") " pod="kube-system/kube-proxy-s6ztf" Oct 8 19:50:28.616277 kubelet[2529]: I1008 19:50:28.616262 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbl5\" (UniqueName: \"kubernetes.io/projected/7759cf3a-9d6b-41d5-8ba6-1f55b1343747-kube-api-access-lhbl5\") pod \"kube-proxy-s6ztf\" (UID: \"7759cf3a-9d6b-41d5-8ba6-1f55b1343747\") " pod="kube-system/kube-proxy-s6ztf" Oct 8 19:50:28.619061 kubelet[2529]: I1008 19:50:28.617998 2529 topology_manager.go:215] "Topology Admit 
Handler" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" podNamespace="kube-system" podName="cilium-59zk8" Oct 8 19:50:28.621204 systemd[1]: Created slice kubepods-besteffort-pod7759cf3a_9d6b_41d5_8ba6_1f55b1343747.slice - libcontainer container kubepods-besteffort-pod7759cf3a_9d6b_41d5_8ba6_1f55b1343747.slice. Oct 8 19:50:28.642769 systemd[1]: Created slice kubepods-burstable-pod2e78eb9a_91c1_4cff_b7a2_30f04e8f7c02.slice - libcontainer container kubepods-burstable-pod2e78eb9a_91c1_4cff_b7a2_30f04e8f7c02.slice. Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716724 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-kernel\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716767 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hubble-tls\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716787 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-config-path\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716814 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54g44\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-kube-api-access-54g44\") pod \"cilium-59zk8\" (UID: 
\"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716832 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-bpf-maps\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718145 kubelet[2529]: I1008 19:50:28.716846 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cni-path\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716861 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-lib-modules\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716880 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-cgroup\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716895 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-etc-cni-netd\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716917 2529 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hostproc\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716933 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-clustermesh-secrets\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718486 kubelet[2529]: I1008 19:50:28.716948 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-net\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718660 kubelet[2529]: I1008 19:50:28.716962 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-run\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.718660 kubelet[2529]: I1008 19:50:28.717002 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-xtables-lock\") pod \"cilium-59zk8\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") " pod="kube-system/cilium-59zk8" Oct 8 19:50:28.852849 kubelet[2529]: I1008 19:50:28.852217 2529 topology_manager.go:215] "Topology Admit Handler" podUID="cb28674c-61d8-4f96-9e0f-06c2286504ef" podNamespace="kube-system" podName="cilium-operator-599987898-plwcr" 
Oct 8 19:50:28.865952 systemd[1]: Created slice kubepods-besteffort-podcb28674c_61d8_4f96_9e0f_06c2286504ef.slice - libcontainer container kubepods-besteffort-podcb28674c_61d8_4f96_9e0f_06c2286504ef.slice. Oct 8 19:50:28.918176 kubelet[2529]: I1008 19:50:28.918120 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb28674c-61d8-4f96-9e0f-06c2286504ef-cilium-config-path\") pod \"cilium-operator-599987898-plwcr\" (UID: \"cb28674c-61d8-4f96-9e0f-06c2286504ef\") " pod="kube-system/cilium-operator-599987898-plwcr" Oct 8 19:50:28.918176 kubelet[2529]: I1008 19:50:28.918166 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc76\" (UniqueName: \"kubernetes.io/projected/cb28674c-61d8-4f96-9e0f-06c2286504ef-kube-api-access-5qc76\") pod \"cilium-operator-599987898-plwcr\" (UID: \"cb28674c-61d8-4f96-9e0f-06c2286504ef\") " pod="kube-system/cilium-operator-599987898-plwcr" Oct 8 19:50:28.936097 kubelet[2529]: E1008 19:50:28.936063 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:28.936687 containerd[1437]: time="2024-10-08T19:50:28.936648116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6ztf,Uid:7759cf3a-9d6b-41d5-8ba6-1f55b1343747,Namespace:kube-system,Attempt:0,}" Oct 8 19:50:28.947015 kubelet[2529]: E1008 19:50:28.945551 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:28.947723 containerd[1437]: time="2024-10-08T19:50:28.947369916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59zk8,Uid:2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02,Namespace:kube-system,Attempt:0,}" Oct 8 
19:50:28.957170 containerd[1437]: time="2024-10-08T19:50:28.957097004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:50:28.957302 containerd[1437]: time="2024-10-08T19:50:28.957144884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:28.957302 containerd[1437]: time="2024-10-08T19:50:28.957168004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:50:28.957302 containerd[1437]: time="2024-10-08T19:50:28.957178924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:28.967818 containerd[1437]: time="2024-10-08T19:50:28.967690165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:50:28.967818 containerd[1437]: time="2024-10-08T19:50:28.967755845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:28.967818 containerd[1437]: time="2024-10-08T19:50:28.967769405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:50:28.967818 containerd[1437]: time="2024-10-08T19:50:28.967778725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:28.977889 systemd[1]: Started cri-containerd-283f450032383f99475b06a58c6c3c89ec101cde7fb4450d2cffb1ca4a6a6a8f.scope - libcontainer container 283f450032383f99475b06a58c6c3c89ec101cde7fb4450d2cffb1ca4a6a6a8f. 
Oct 8 19:50:28.980503 systemd[1]: Started cri-containerd-9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36.scope - libcontainer container 9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36. Oct 8 19:50:29.006535 containerd[1437]: time="2024-10-08T19:50:29.006459199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-59zk8,Uid:2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02,Namespace:kube-system,Attempt:0,} returns sandbox id \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\"" Oct 8 19:50:29.007865 kubelet[2529]: E1008 19:50:29.007839 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:29.008293 containerd[1437]: time="2024-10-08T19:50:29.007847869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6ztf,Uid:7759cf3a-9d6b-41d5-8ba6-1f55b1343747,Namespace:kube-system,Attempt:0,} returns sandbox id \"283f450032383f99475b06a58c6c3c89ec101cde7fb4450d2cffb1ca4a6a6a8f\"" Oct 8 19:50:29.009264 kubelet[2529]: E1008 19:50:29.009130 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:29.009972 containerd[1437]: time="2024-10-08T19:50:29.009942014Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 19:50:29.011587 containerd[1437]: time="2024-10-08T19:50:29.011329924Z" level=info msg="CreateContainer within sandbox \"283f450032383f99475b06a58c6c3c89ec101cde7fb4450d2cffb1ca4a6a6a8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:50:29.024918 containerd[1437]: time="2024-10-08T19:50:29.024862108Z" level=info msg="CreateContainer within sandbox \"283f450032383f99475b06a58c6c3c89ec101cde7fb4450d2cffb1ca4a6a6a8f\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0323c704b82d6b6764b3564c6d1946192fadfd2ba77b33e4227277ea0d43df8f\"" Oct 8 19:50:29.025399 containerd[1437]: time="2024-10-08T19:50:29.025354665Z" level=info msg="StartContainer for \"0323c704b82d6b6764b3564c6d1946192fadfd2ba77b33e4227277ea0d43df8f\"" Oct 8 19:50:29.051178 systemd[1]: Started cri-containerd-0323c704b82d6b6764b3564c6d1946192fadfd2ba77b33e4227277ea0d43df8f.scope - libcontainer container 0323c704b82d6b6764b3564c6d1946192fadfd2ba77b33e4227277ea0d43df8f. Oct 8 19:50:29.081155 containerd[1437]: time="2024-10-08T19:50:29.079403722Z" level=info msg="StartContainer for \"0323c704b82d6b6764b3564c6d1946192fadfd2ba77b33e4227277ea0d43df8f\" returns successfully" Oct 8 19:50:29.175736 kubelet[2529]: E1008 19:50:29.172298 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:29.176427 containerd[1437]: time="2024-10-08T19:50:29.176061196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-plwcr,Uid:cb28674c-61d8-4f96-9e0f-06c2286504ef,Namespace:kube-system,Attempt:0,}" Oct 8 19:50:29.197109 containerd[1437]: time="2024-10-08T19:50:29.196521931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:50:29.197254 containerd[1437]: time="2024-10-08T19:50:29.197097167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:29.197254 containerd[1437]: time="2024-10-08T19:50:29.197119407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:50:29.197254 containerd[1437]: time="2024-10-08T19:50:29.197129767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:29.222214 systemd[1]: Started cri-containerd-57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496.scope - libcontainer container 57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496. Oct 8 19:50:29.253950 containerd[1437]: time="2024-10-08T19:50:29.253909444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-plwcr,Uid:cb28674c-61d8-4f96-9e0f-06c2286504ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\"" Oct 8 19:50:29.254906 kubelet[2529]: E1008 19:50:29.254857 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:29.780701 kubelet[2529]: E1008 19:50:29.780672 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:35.025614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410205069.mount: Deactivated successfully. 
Oct 8 19:50:36.272987 containerd[1437]: time="2024-10-08T19:50:36.272908733Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:36.273991 containerd[1437]: time="2024-10-08T19:50:36.273921888Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651582" Oct 8 19:50:36.275044 containerd[1437]: time="2024-10-08T19:50:36.275009922Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:36.276607 containerd[1437]: time="2024-10-08T19:50:36.276568154Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.26658786s" Oct 8 19:50:36.276654 containerd[1437]: time="2024-10-08T19:50:36.276605034Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 8 19:50:36.283292 containerd[1437]: time="2024-10-08T19:50:36.283209080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 19:50:36.284881 containerd[1437]: time="2024-10-08T19:50:36.284844672Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:50:36.310995 containerd[1437]: time="2024-10-08T19:50:36.310900176Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\"" Oct 8 19:50:36.311529 containerd[1437]: time="2024-10-08T19:50:36.311480893Z" level=info msg="StartContainer for \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\"" Oct 8 19:50:36.343180 systemd[1]: Started cri-containerd-e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2.scope - libcontainer container e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2. Oct 8 19:50:36.434651 containerd[1437]: time="2024-10-08T19:50:36.434601775Z" level=info msg="StartContainer for \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\" returns successfully" Oct 8 19:50:36.466233 systemd[1]: cri-containerd-e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2.scope: Deactivated successfully. Oct 8 19:50:36.487747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2-rootfs.mount: Deactivated successfully. 
Oct 8 19:50:36.624866 containerd[1437]: time="2024-10-08T19:50:36.621368568Z" level=info msg="shim disconnected" id=e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2 namespace=k8s.io Oct 8 19:50:36.624866 containerd[1437]: time="2024-10-08T19:50:36.624700590Z" level=warning msg="cleaning up after shim disconnected" id=e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2 namespace=k8s.io Oct 8 19:50:36.624866 containerd[1437]: time="2024-10-08T19:50:36.624714710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:50:36.798289 kubelet[2529]: E1008 19:50:36.798256 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:36.802293 containerd[1437]: time="2024-10-08T19:50:36.802244470Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:50:36.826316 containerd[1437]: time="2024-10-08T19:50:36.826256986Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\"" Oct 8 19:50:36.826740 containerd[1437]: time="2024-10-08T19:50:36.826709023Z" level=info msg="StartContainer for \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\"" Oct 8 19:50:36.829670 kubelet[2529]: I1008 19:50:36.829558 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6ztf" podStartSLOduration=8.829542169 podStartE2EDuration="8.829542169s" podCreationTimestamp="2024-10-08 19:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-10-08 19:50:29.788810812 +0000 UTC m=+16.131729699" watchObservedRunningTime="2024-10-08 19:50:36.829542169 +0000 UTC m=+23.172461096" Oct 8 19:50:36.857147 systemd[1]: Started cri-containerd-2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339.scope - libcontainer container 2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339. Oct 8 19:50:36.893601 containerd[1437]: time="2024-10-08T19:50:36.893448198Z" level=info msg="StartContainer for \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\" returns successfully" Oct 8 19:50:36.906812 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:50:36.907466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:50:36.907589 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:50:36.916234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:50:36.916560 systemd[1]: cri-containerd-2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339.scope: Deactivated successfully. Oct 8 19:50:36.939381 containerd[1437]: time="2024-10-08T19:50:36.939140281Z" level=info msg="shim disconnected" id=2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339 namespace=k8s.io Oct 8 19:50:36.939381 containerd[1437]: time="2024-10-08T19:50:36.939203480Z" level=warning msg="cleaning up after shim disconnected" id=2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339 namespace=k8s.io Oct 8 19:50:36.939381 containerd[1437]: time="2024-10-08T19:50:36.939213240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:50:36.956622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:50:37.645811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008300667.mount: Deactivated successfully. 
Oct 8 19:50:37.801147 kubelet[2529]: E1008 19:50:37.800882 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:37.803899 containerd[1437]: time="2024-10-08T19:50:37.803835127Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:50:37.826756 containerd[1437]: time="2024-10-08T19:50:37.826701653Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\"" Oct 8 19:50:37.827364 containerd[1437]: time="2024-10-08T19:50:37.827314050Z" level=info msg="StartContainer for \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\"" Oct 8 19:50:37.856532 systemd[1]: Started cri-containerd-2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d.scope - libcontainer container 2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d. Oct 8 19:50:37.885814 containerd[1437]: time="2024-10-08T19:50:37.885766560Z" level=info msg="StartContainer for \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\" returns successfully" Oct 8 19:50:37.896172 systemd[1]: cri-containerd-2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d.scope: Deactivated successfully. 
Oct 8 19:50:37.964312 containerd[1437]: time="2024-10-08T19:50:37.964254929Z" level=info msg="shim disconnected" id=2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d namespace=k8s.io Oct 8 19:50:37.964526 containerd[1437]: time="2024-10-08T19:50:37.964509328Z" level=warning msg="cleaning up after shim disconnected" id=2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d namespace=k8s.io Oct 8 19:50:37.964581 containerd[1437]: time="2024-10-08T19:50:37.964569328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:50:38.172811 containerd[1437]: time="2024-10-08T19:50:38.172664126Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:38.173808 containerd[1437]: time="2024-10-08T19:50:38.173781801Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138290" Oct 8 19:50:38.176130 containerd[1437]: time="2024-10-08T19:50:38.176097950Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:50:38.177485 containerd[1437]: time="2024-10-08T19:50:38.177349424Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.894015265s" Oct 8 19:50:38.177485 containerd[1437]: time="2024-10-08T19:50:38.177404984Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 8 19:50:38.181004 containerd[1437]: time="2024-10-08T19:50:38.180933047Z" level=info msg="CreateContainer within sandbox \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 19:50:38.190086 containerd[1437]: time="2024-10-08T19:50:38.190029363Z" level=info msg="CreateContainer within sandbox \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\"" Oct 8 19:50:38.191016 containerd[1437]: time="2024-10-08T19:50:38.190488961Z" level=info msg="StartContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\"" Oct 8 19:50:38.218177 systemd[1]: Started cri-containerd-6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491.scope - libcontainer container 6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491. 
Oct 8 19:50:38.250344 containerd[1437]: time="2024-10-08T19:50:38.250221316Z" level=info msg="StartContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" returns successfully" Oct 8 19:50:38.815930 kubelet[2529]: E1008 19:50:38.815888 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:38.830008 kubelet[2529]: E1008 19:50:38.823285 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:38.830165 containerd[1437]: time="2024-10-08T19:50:38.823723616Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:50:38.862531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269490255.mount: Deactivated successfully. Oct 8 19:50:38.866944 containerd[1437]: time="2024-10-08T19:50:38.866885930Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\"" Oct 8 19:50:38.869128 containerd[1437]: time="2024-10-08T19:50:38.868354363Z" level=info msg="StartContainer for \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\"" Oct 8 19:50:38.910325 systemd[1]: Started cri-containerd-8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259.scope - libcontainer container 8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259. Oct 8 19:50:38.938301 systemd[1]: cri-containerd-8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259.scope: Deactivated successfully. 
Oct 8 19:50:38.950146 containerd[1437]: time="2024-10-08T19:50:38.950090092Z" level=info msg="StartContainer for \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\" returns successfully" Oct 8 19:50:39.037196 containerd[1437]: time="2024-10-08T19:50:39.037106283Z" level=info msg="shim disconnected" id=8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259 namespace=k8s.io Oct 8 19:50:39.037196 containerd[1437]: time="2024-10-08T19:50:39.037188443Z" level=warning msg="cleaning up after shim disconnected" id=8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259 namespace=k8s.io Oct 8 19:50:39.037196 containerd[1437]: time="2024-10-08T19:50:39.037198043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:50:39.304180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259-rootfs.mount: Deactivated successfully. Oct 8 19:50:39.827492 kubelet[2529]: E1008 19:50:39.827464 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:39.835780 kubelet[2529]: E1008 19:50:39.829325 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:39.835821 containerd[1437]: time="2024-10-08T19:50:39.832853507Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:50:39.859477 kubelet[2529]: I1008 19:50:39.856029 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-plwcr" podStartSLOduration=2.93392453 podStartE2EDuration="11.856013561s" podCreationTimestamp="2024-10-08 19:50:28 +0000 UTC" 
firstStartedPulling="2024-10-08 19:50:29.256073429 +0000 UTC m=+15.598992356" lastFinishedPulling="2024-10-08 19:50:38.1781625 +0000 UTC m=+24.521081387" observedRunningTime="2024-10-08 19:50:38.887856429 +0000 UTC m=+25.230775356" watchObservedRunningTime="2024-10-08 19:50:39.856013561 +0000 UTC m=+26.198932488" Oct 8 19:50:39.875584 containerd[1437]: time="2024-10-08T19:50:39.875455111Z" level=info msg="CreateContainer within sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\"" Oct 8 19:50:39.875985 containerd[1437]: time="2024-10-08T19:50:39.875910069Z" level=info msg="StartContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\"" Oct 8 19:50:39.902184 systemd[1]: Started cri-containerd-59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7.scope - libcontainer container 59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7. 
Oct 8 19:50:39.927900 containerd[1437]: time="2024-10-08T19:50:39.926567997Z" level=info msg="StartContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" returns successfully" Oct 8 19:50:40.097363 kubelet[2529]: I1008 19:50:40.097250 2529 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:50:40.141364 kubelet[2529]: I1008 19:50:40.140148 2529 topology_manager.go:215] "Topology Admit Handler" podUID="771772eb-0659-4b9d-baff-072adf713b17" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nsqwl" Oct 8 19:50:40.144985 kubelet[2529]: I1008 19:50:40.144951 2529 topology_manager.go:215] "Topology Admit Handler" podUID="02babf28-e995-4ef2-953d-11de1c391d00" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b9tdj" Oct 8 19:50:40.154232 systemd[1]: Created slice kubepods-burstable-pod771772eb_0659_4b9d_baff_072adf713b17.slice - libcontainer container kubepods-burstable-pod771772eb_0659_4b9d_baff_072adf713b17.slice. Oct 8 19:50:40.161457 systemd[1]: Created slice kubepods-burstable-pod02babf28_e995_4ef2_953d_11de1c391d00.slice - libcontainer container kubepods-burstable-pod02babf28_e995_4ef2_953d_11de1c391d00.slice. 
Oct 8 19:50:40.297303 kubelet[2529]: I1008 19:50:40.297254 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/771772eb-0659-4b9d-baff-072adf713b17-config-volume\") pod \"coredns-7db6d8ff4d-nsqwl\" (UID: \"771772eb-0659-4b9d-baff-072adf713b17\") " pod="kube-system/coredns-7db6d8ff4d-nsqwl" Oct 8 19:50:40.297471 kubelet[2529]: I1008 19:50:40.297316 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02babf28-e995-4ef2-953d-11de1c391d00-config-volume\") pod \"coredns-7db6d8ff4d-b9tdj\" (UID: \"02babf28-e995-4ef2-953d-11de1c391d00\") " pod="kube-system/coredns-7db6d8ff4d-b9tdj" Oct 8 19:50:40.297471 kubelet[2529]: I1008 19:50:40.297374 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdftl\" (UniqueName: \"kubernetes.io/projected/771772eb-0659-4b9d-baff-072adf713b17-kube-api-access-gdftl\") pod \"coredns-7db6d8ff4d-nsqwl\" (UID: \"771772eb-0659-4b9d-baff-072adf713b17\") " pod="kube-system/coredns-7db6d8ff4d-nsqwl" Oct 8 19:50:40.297471 kubelet[2529]: I1008 19:50:40.297393 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plw5f\" (UniqueName: \"kubernetes.io/projected/02babf28-e995-4ef2-953d-11de1c391d00-kube-api-access-plw5f\") pod \"coredns-7db6d8ff4d-b9tdj\" (UID: \"02babf28-e995-4ef2-953d-11de1c391d00\") " pod="kube-system/coredns-7db6d8ff4d-b9tdj" Oct 8 19:50:40.460569 kubelet[2529]: E1008 19:50:40.460534 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:40.462065 containerd[1437]: time="2024-10-08T19:50:40.461420218Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsqwl,Uid:771772eb-0659-4b9d-baff-072adf713b17,Namespace:kube-system,Attempt:0,}" Oct 8 19:50:40.464345 kubelet[2529]: E1008 19:50:40.464319 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:40.465164 containerd[1437]: time="2024-10-08T19:50:40.465124162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b9tdj,Uid:02babf28-e995-4ef2-953d-11de1c391d00,Namespace:kube-system,Attempt:0,}" Oct 8 19:50:40.833090 kubelet[2529]: E1008 19:50:40.832921 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:41.835904 kubelet[2529]: E1008 19:50:41.835866 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:42.199221 systemd-networkd[1372]: cilium_host: Link UP Oct 8 19:50:42.200533 systemd-networkd[1372]: cilium_net: Link UP Oct 8 19:50:42.200536 systemd-networkd[1372]: cilium_net: Gained carrier Oct 8 19:50:42.200856 systemd-networkd[1372]: cilium_host: Gained carrier Oct 8 19:50:42.290055 systemd-networkd[1372]: cilium_vxlan: Link UP Oct 8 19:50:42.290064 systemd-networkd[1372]: cilium_vxlan: Gained carrier Oct 8 19:50:42.579172 systemd-networkd[1372]: cilium_net: Gained IPv6LL Oct 8 19:50:42.584062 kernel: NET: Registered PF_ALG protocol family Oct 8 19:50:42.837142 kubelet[2529]: E1008 19:50:42.837031 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:42.980075 systemd-networkd[1372]: cilium_host: Gained IPv6LL Oct 8 19:50:43.144002 systemd-networkd[1372]: 
lxc_health: Link UP Oct 8 19:50:43.150087 systemd-networkd[1372]: lxc_health: Gained carrier Oct 8 19:50:43.575081 systemd-networkd[1372]: lxc44eee1d2942f: Link UP Oct 8 19:50:43.585017 kernel: eth0: renamed from tmpb5d71 Oct 8 19:50:43.589715 systemd-networkd[1372]: lxc44eee1d2942f: Gained carrier Oct 8 19:50:43.590691 systemd-networkd[1372]: lxc59ac1ed558cb: Link UP Oct 8 19:50:43.608035 kernel: eth0: renamed from tmp3b80a Oct 8 19:50:43.616513 systemd-networkd[1372]: lxc59ac1ed558cb: Gained carrier Oct 8 19:50:43.619213 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Oct 8 19:50:44.058662 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:39594.service - OpenSSH per-connection server daemon (10.0.0.1:39594). Oct 8 19:50:44.091653 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 39594 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:50:44.093119 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:50:44.097357 systemd-logind[1425]: New session 8 of user core. Oct 8 19:50:44.108165 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:50:44.240173 sshd[3755]: pam_unix(sshd:session): session closed for user core Oct 8 19:50:44.243335 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:39594.service: Deactivated successfully. Oct 8 19:50:44.245242 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:50:44.247499 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:50:44.248613 systemd-logind[1425]: Removed session 8. 
Oct 8 19:50:44.955790 kubelet[2529]: E1008 19:50:44.955753 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:44.973486 kubelet[2529]: I1008 19:50:44.972873 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-59zk8" podStartSLOduration=9.698993664 podStartE2EDuration="16.972853566s" podCreationTimestamp="2024-10-08 19:50:28 +0000 UTC" firstStartedPulling="2024-10-08 19:50:29.009204699 +0000 UTC m=+15.352123626" lastFinishedPulling="2024-10-08 19:50:36.283064601 +0000 UTC m=+22.625983528" observedRunningTime="2024-10-08 19:50:40.847083993 +0000 UTC m=+27.190002920" watchObservedRunningTime="2024-10-08 19:50:44.972853566 +0000 UTC m=+31.315772493" Oct 8 19:50:45.091212 systemd-networkd[1372]: lxc44eee1d2942f: Gained IPv6LL Oct 8 19:50:45.155151 systemd-networkd[1372]: lxc_health: Gained IPv6LL Oct 8 19:50:45.347151 systemd-networkd[1372]: lxc59ac1ed558cb: Gained IPv6LL Oct 8 19:50:47.232744 containerd[1437]: time="2024-10-08T19:50:47.232429027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:50:47.232744 containerd[1437]: time="2024-10-08T19:50:47.232681746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:47.233261 containerd[1437]: time="2024-10-08T19:50:47.232745866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:50:47.233261 containerd[1437]: time="2024-10-08T19:50:47.232762026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:47.254165 systemd[1]: Started cri-containerd-3b80a560ffef48133d7116ba9b7e9b79b62481e647dd8cf67cd0fbe3ae09c372.scope - libcontainer container 3b80a560ffef48133d7116ba9b7e9b79b62481e647dd8cf67cd0fbe3ae09c372. Oct 8 19:50:47.260085 containerd[1437]: time="2024-10-08T19:50:47.259804452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:50:47.260085 containerd[1437]: time="2024-10-08T19:50:47.259868891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:47.260085 containerd[1437]: time="2024-10-08T19:50:47.259889571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:50:47.260085 containerd[1437]: time="2024-10-08T19:50:47.259903011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:50:47.268251 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:50:47.280170 systemd[1]: Started cri-containerd-b5d718ef73b56475f75969f268338db14bf047daee1c4bb863e6505e87045409.scope - libcontainer container b5d718ef73b56475f75969f268338db14bf047daee1c4bb863e6505e87045409. 
Oct 8 19:50:47.292113 containerd[1437]: time="2024-10-08T19:50:47.292064859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b9tdj,Uid:02babf28-e995-4ef2-953d-11de1c391d00,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b80a560ffef48133d7116ba9b7e9b79b62481e647dd8cf67cd0fbe3ae09c372\"" Oct 8 19:50:47.294185 kubelet[2529]: E1008 19:50:47.293585 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:47.295956 containerd[1437]: time="2024-10-08T19:50:47.295773766Z" level=info msg="CreateContainer within sandbox \"3b80a560ffef48133d7116ba9b7e9b79b62481e647dd8cf67cd0fbe3ae09c372\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:50:47.297323 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:50:47.311323 containerd[1437]: time="2024-10-08T19:50:47.311279272Z" level=info msg="CreateContainer within sandbox \"3b80a560ffef48133d7116ba9b7e9b79b62481e647dd8cf67cd0fbe3ae09c372\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44cdf0cd5e34271636c9800a7a576fd5cbd8887039c111f143e3804cdc06464a\"" Oct 8 19:50:47.312322 containerd[1437]: time="2024-10-08T19:50:47.312289989Z" level=info msg="StartContainer for \"44cdf0cd5e34271636c9800a7a576fd5cbd8887039c111f143e3804cdc06464a\"" Oct 8 19:50:47.318524 containerd[1437]: time="2024-10-08T19:50:47.318473167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsqwl,Uid:771772eb-0659-4b9d-baff-072adf713b17,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d718ef73b56475f75969f268338db14bf047daee1c4bb863e6505e87045409\"" Oct 8 19:50:47.319445 kubelet[2529]: E1008 19:50:47.319423 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:47.321650 containerd[1437]: time="2024-10-08T19:50:47.321625596Z" level=info msg="CreateContainer within sandbox \"b5d718ef73b56475f75969f268338db14bf047daee1c4bb863e6505e87045409\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:50:47.330541 containerd[1437]: time="2024-10-08T19:50:47.330503765Z" level=info msg="CreateContainer within sandbox \"b5d718ef73b56475f75969f268338db14bf047daee1c4bb863e6505e87045409\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b64cda8869d88842fdd629b197a4f7d8de5c94b83b7189ff777f96074cdb2df0\"" Oct 8 19:50:47.332364 containerd[1437]: time="2024-10-08T19:50:47.332317719Z" level=info msg="StartContainer for \"b64cda8869d88842fdd629b197a4f7d8de5c94b83b7189ff777f96074cdb2df0\"" Oct 8 19:50:47.351514 systemd[1]: Started cri-containerd-44cdf0cd5e34271636c9800a7a576fd5cbd8887039c111f143e3804cdc06464a.scope - libcontainer container 44cdf0cd5e34271636c9800a7a576fd5cbd8887039c111f143e3804cdc06464a. Oct 8 19:50:47.370196 systemd[1]: Started cri-containerd-b64cda8869d88842fdd629b197a4f7d8de5c94b83b7189ff777f96074cdb2df0.scope - libcontainer container b64cda8869d88842fdd629b197a4f7d8de5c94b83b7189ff777f96074cdb2df0. 
Oct 8 19:50:47.395815 containerd[1437]: time="2024-10-08T19:50:47.395770378Z" level=info msg="StartContainer for \"44cdf0cd5e34271636c9800a7a576fd5cbd8887039c111f143e3804cdc06464a\" returns successfully" Oct 8 19:50:47.411200 containerd[1437]: time="2024-10-08T19:50:47.408306934Z" level=info msg="StartContainer for \"b64cda8869d88842fdd629b197a4f7d8de5c94b83b7189ff777f96074cdb2df0\" returns successfully" Oct 8 19:50:47.850070 kubelet[2529]: E1008 19:50:47.849497 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:47.850454 kubelet[2529]: E1008 19:50:47.850387 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:50:47.867679 kubelet[2529]: I1008 19:50:47.867615 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b9tdj" podStartSLOduration=19.866888376 podStartE2EDuration="19.866888376s" podCreationTimestamp="2024-10-08 19:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:50:47.866458978 +0000 UTC m=+34.209377945" watchObservedRunningTime="2024-10-08 19:50:47.866888376 +0000 UTC m=+34.209807263" Oct 8 19:50:47.877665 kubelet[2529]: I1008 19:50:47.877411 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nsqwl" podStartSLOduration=19.8773933 podStartE2EDuration="19.8773933s" podCreationTimestamp="2024-10-08 19:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:50:47.876071104 +0000 UTC m=+34.218990071" watchObservedRunningTime="2024-10-08 19:50:47.8773933 +0000 UTC m=+34.220312227" 
Oct 8 19:50:48.238055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434201972.mount: Deactivated successfully.
Oct 8 19:50:48.859409 kubelet[2529]: E1008 19:50:48.859306 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:48.859409 kubelet[2529]: E1008 19:50:48.859328 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:48.885788 kubelet[2529]: I1008 19:50:48.885731 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:50:48.886501 kubelet[2529]: E1008 19:50:48.886479 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:49.260114 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:39606.service - OpenSSH per-connection server daemon (10.0.0.1:39606).
Oct 8 19:50:49.300558 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 39606 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:50:49.302210 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:50:49.306008 systemd-logind[1425]: New session 9 of user core.
Oct 8 19:50:49.317187 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:50:49.429053 sshd[3953]: pam_unix(sshd:session): session closed for user core
Oct 8 19:50:49.432590 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:39606.service: Deactivated successfully.
Oct 8 19:50:49.434737 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:50:49.435569 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:50:49.436379 systemd-logind[1425]: Removed session 9.
Oct 8 19:50:49.860706 kubelet[2529]: E1008 19:50:49.860582 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:49.862053 kubelet[2529]: E1008 19:50:49.861663 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:49.862053 kubelet[2529]: E1008 19:50:49.861719 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:50:54.443863 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:56590.service - OpenSSH per-connection server daemon (10.0.0.1:56590).
Oct 8 19:50:54.481383 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 56590 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:50:54.481789 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:50:54.485393 systemd-logind[1425]: New session 10 of user core.
Oct 8 19:50:54.495119 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:50:54.604045 sshd[3972]: pam_unix(sshd:session): session closed for user core
Oct 8 19:50:54.607376 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:56590.service: Deactivated successfully.
Oct 8 19:50:54.610461 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:50:54.611111 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:50:54.611911 systemd-logind[1425]: Removed session 10.
Oct 8 19:50:59.617725 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:56606.service - OpenSSH per-connection server daemon (10.0.0.1:56606).
Oct 8 19:50:59.656089 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 56606 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:50:59.657660 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:50:59.661410 systemd-logind[1425]: New session 11 of user core.
Oct 8 19:50:59.674127 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:50:59.780791 sshd[3990]: pam_unix(sshd:session): session closed for user core
Oct 8 19:50:59.794627 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:56606.service: Deactivated successfully.
Oct 8 19:50:59.796262 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:50:59.797479 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:50:59.798874 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:56622.service - OpenSSH per-connection server daemon (10.0.0.1:56622).
Oct 8 19:50:59.799789 systemd-logind[1425]: Removed session 11.
Oct 8 19:50:59.835139 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 56622 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:50:59.836292 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:50:59.839832 systemd-logind[1425]: New session 12 of user core.
Oct 8 19:50:59.846133 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:50:59.980955 sshd[4005]: pam_unix(sshd:session): session closed for user core
Oct 8 19:50:59.991533 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:56622.service: Deactivated successfully.
Oct 8 19:50:59.995763 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:50:59.997495 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:51:00.006289 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:56636.service - OpenSSH per-connection server daemon (10.0.0.1:56636).
Oct 8 19:51:00.007318 systemd-logind[1425]: Removed session 12.
Oct 8 19:51:00.039109 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 56636 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:00.040305 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:00.043744 systemd-logind[1425]: New session 13 of user core.
Oct 8 19:51:00.055145 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:51:00.160738 sshd[4018]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:00.163956 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:56636.service: Deactivated successfully.
Oct 8 19:51:00.166458 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:51:00.167060 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:51:00.167789 systemd-logind[1425]: Removed session 13.
Oct 8 19:51:05.171595 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:58030.service - OpenSSH per-connection server daemon (10.0.0.1:58030).
Oct 8 19:51:05.208965 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 58030 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:05.210193 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:05.213974 systemd-logind[1425]: New session 14 of user core.
Oct 8 19:51:05.221147 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:51:05.332813 sshd[4032]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:05.335962 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:58030.service: Deactivated successfully.
Oct 8 19:51:05.338341 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:51:05.340602 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:51:05.341450 systemd-logind[1425]: Removed session 14.
Oct 8 19:51:10.343479 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:58042.service - OpenSSH per-connection server daemon (10.0.0.1:58042).
Oct 8 19:51:10.379465 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 58042 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:10.380679 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:10.384058 systemd-logind[1425]: New session 15 of user core.
Oct 8 19:51:10.391123 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:51:10.500860 sshd[4048]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:10.512501 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:58042.service: Deactivated successfully.
Oct 8 19:51:10.515162 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:51:10.516442 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:51:10.525239 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:58052.service - OpenSSH per-connection server daemon (10.0.0.1:58052).
Oct 8 19:51:10.526479 systemd-logind[1425]: Removed session 15.
Oct 8 19:51:10.558252 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 58052 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:10.559411 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:10.562915 systemd-logind[1425]: New session 16 of user core.
Oct 8 19:51:10.572105 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:51:10.813412 sshd[4063]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:10.824617 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:58052.service: Deactivated successfully.
Oct 8 19:51:10.826322 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:51:10.827583 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:51:10.834267 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:58056.service - OpenSSH per-connection server daemon (10.0.0.1:58056).
Oct 8 19:51:10.835139 systemd-logind[1425]: Removed session 16.
Oct 8 19:51:10.866469 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 58056 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:10.867572 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:10.871588 systemd-logind[1425]: New session 17 of user core.
Oct 8 19:51:10.877116 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:51:12.165658 sshd[4075]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:12.173834 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:58056.service: Deactivated successfully.
Oct 8 19:51:12.175842 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:51:12.178461 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:51:12.185722 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:58058.service - OpenSSH per-connection server daemon (10.0.0.1:58058).
Oct 8 19:51:12.188552 systemd-logind[1425]: Removed session 17.
Oct 8 19:51:12.225268 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 58058 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:12.226920 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:12.231707 systemd-logind[1425]: New session 18 of user core.
Oct 8 19:51:12.252141 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:51:12.481186 sshd[4094]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:12.492524 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:58058.service: Deactivated successfully.
Oct 8 19:51:12.494241 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:51:12.495778 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:51:12.503254 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:46556.service - OpenSSH per-connection server daemon (10.0.0.1:46556).
Oct 8 19:51:12.504409 systemd-logind[1425]: Removed session 18.
Oct 8 19:51:12.537040 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 46556 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:12.538547 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:12.546325 systemd-logind[1425]: New session 19 of user core.
Oct 8 19:51:12.551174 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:51:12.672276 sshd[4106]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:12.675521 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:46556.service: Deactivated successfully.
Oct 8 19:51:12.677270 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:51:12.678721 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:51:12.679611 systemd-logind[1425]: Removed session 19.
Oct 8 19:51:17.683816 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570).
Oct 8 19:51:17.725473 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:17.725890 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:17.732833 systemd-logind[1425]: New session 20 of user core.
Oct 8 19:51:17.741519 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:51:17.865149 sshd[4125]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:17.870246 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:46570.service: Deactivated successfully.
Oct 8 19:51:17.872615 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:51:17.875164 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:51:17.876213 systemd-logind[1425]: Removed session 20.
Oct 8 19:51:22.877732 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:54130.service - OpenSSH per-connection server daemon (10.0.0.1:54130).
Oct 8 19:51:22.929214 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 54130 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:22.931516 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:22.937628 systemd-logind[1425]: New session 21 of user core.
Oct 8 19:51:22.950200 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:51:23.061208 sshd[4139]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:23.064383 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:54130.service: Deactivated successfully.
Oct 8 19:51:23.066491 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:51:23.068403 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:51:23.069352 systemd-logind[1425]: Removed session 21.
Oct 8 19:51:28.071479 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:54146.service - OpenSSH per-connection server daemon (10.0.0.1:54146).
Oct 8 19:51:28.118586 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 54146 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:28.119046 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:28.123336 systemd-logind[1425]: New session 22 of user core.
Oct 8 19:51:28.129183 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:51:28.255160 sshd[4154]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:28.269010 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:54146.service: Deactivated successfully.
Oct 8 19:51:28.270703 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:51:28.272844 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:51:28.274260 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:54150.service - OpenSSH per-connection server daemon (10.0.0.1:54150).
Oct 8 19:51:28.278475 systemd-logind[1425]: Removed session 22.
Oct 8 19:51:28.312443 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 54150 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:28.313732 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:28.317750 systemd-logind[1425]: New session 23 of user core.
Oct 8 19:51:28.328127 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:51:30.584507 containerd[1437]: time="2024-10-08T19:51:30.584426199Z" level=info msg="StopContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" with timeout 30 (s)"
Oct 8 19:51:30.605574 containerd[1437]: time="2024-10-08T19:51:30.605111136Z" level=info msg="Stop container \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" with signal terminated"
Oct 8 19:51:30.613779 systemd[1]: cri-containerd-6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491.scope: Deactivated successfully.
Oct 8 19:51:30.638863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491-rootfs.mount: Deactivated successfully.
Oct 8 19:51:30.641770 containerd[1437]: time="2024-10-08T19:51:30.641703150Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:51:30.648149 containerd[1437]: time="2024-10-08T19:51:30.648101550Z" level=info msg="StopContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" with timeout 2 (s)"
Oct 8 19:51:30.648671 containerd[1437]: time="2024-10-08T19:51:30.648401074Z" level=info msg="Stop container \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" with signal terminated"
Oct 8 19:51:30.649317 containerd[1437]: time="2024-10-08T19:51:30.649274844Z" level=info msg="shim disconnected" id=6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491 namespace=k8s.io
Oct 8 19:51:30.649317 containerd[1437]: time="2024-10-08T19:51:30.649318205Z" level=warning msg="cleaning up after shim disconnected" id=6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491 namespace=k8s.io
Oct 8 19:51:30.649424 containerd[1437]: time="2024-10-08T19:51:30.649327445Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:30.654303 systemd-networkd[1372]: lxc_health: Link DOWN
Oct 8 19:51:30.654310 systemd-networkd[1372]: lxc_health: Lost carrier
Oct 8 19:51:30.678881 systemd[1]: cri-containerd-59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7.scope: Deactivated successfully.
Oct 8 19:51:30.679182 systemd[1]: cri-containerd-59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7.scope: Consumed 6.482s CPU time.
Oct 8 19:51:30.694288 containerd[1437]: time="2024-10-08T19:51:30.694240003Z" level=info msg="StopContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" returns successfully"
Oct 8 19:51:30.698969 containerd[1437]: time="2024-10-08T19:51:30.698931302Z" level=info msg="StopPodSandbox for \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\""
Oct 8 19:51:30.701471 containerd[1437]: time="2024-10-08T19:51:30.699112424Z" level=info msg="Container to stop \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.702759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7-rootfs.mount: Deactivated successfully.
Oct 8 19:51:30.704970 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496-shm.mount: Deactivated successfully.
Oct 8 19:51:30.708515 systemd[1]: cri-containerd-57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496.scope: Deactivated successfully.
Oct 8 19:51:30.710055 containerd[1437]: time="2024-10-08T19:51:30.709972679Z" level=info msg="shim disconnected" id=59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7 namespace=k8s.io
Oct 8 19:51:30.710299 containerd[1437]: time="2024-10-08T19:51:30.710277243Z" level=warning msg="cleaning up after shim disconnected" id=59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7 namespace=k8s.io
Oct 8 19:51:30.710379 containerd[1437]: time="2024-10-08T19:51:30.710364724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:30.727227 containerd[1437]: time="2024-10-08T19:51:30.727185853Z" level=info msg="StopContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" returns successfully"
Oct 8 19:51:30.727821 containerd[1437]: time="2024-10-08T19:51:30.727787700Z" level=info msg="StopPodSandbox for \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\""
Oct 8 19:51:30.727881 containerd[1437]: time="2024-10-08T19:51:30.727827981Z" level=info msg="Container to stop \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.727881 containerd[1437]: time="2024-10-08T19:51:30.727861701Z" level=info msg="Container to stop \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.727881 containerd[1437]: time="2024-10-08T19:51:30.727871141Z" level=info msg="Container to stop \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.727952 containerd[1437]: time="2024-10-08T19:51:30.727880661Z" level=info msg="Container to stop \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.727952 containerd[1437]: time="2024-10-08T19:51:30.727890621Z" level=info msg="Container to stop \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:51:30.729525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36-shm.mount: Deactivated successfully.
Oct 8 19:51:30.734360 containerd[1437]: time="2024-10-08T19:51:30.734284901Z" level=info msg="shim disconnected" id=57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496 namespace=k8s.io
Oct 8 19:51:30.734360 containerd[1437]: time="2024-10-08T19:51:30.734328421Z" level=warning msg="cleaning up after shim disconnected" id=57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496 namespace=k8s.io
Oct 8 19:51:30.734360 containerd[1437]: time="2024-10-08T19:51:30.734336622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:30.738241 systemd[1]: cri-containerd-9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36.scope: Deactivated successfully.
Oct 8 19:51:30.751322 containerd[1437]: time="2024-10-08T19:51:30.751067109Z" level=info msg="TearDown network for sandbox \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\" successfully"
Oct 8 19:51:30.751322 containerd[1437]: time="2024-10-08T19:51:30.751101030Z" level=info msg="StopPodSandbox for \"57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496\" returns successfully"
Oct 8 19:51:30.762778 containerd[1437]: time="2024-10-08T19:51:30.762711894Z" level=info msg="shim disconnected" id=9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36 namespace=k8s.io
Oct 8 19:51:30.763008 containerd[1437]: time="2024-10-08T19:51:30.762965857Z" level=warning msg="cleaning up after shim disconnected" id=9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36 namespace=k8s.io
Oct 8 19:51:30.763710 containerd[1437]: time="2024-10-08T19:51:30.763553905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:30.774479 containerd[1437]: time="2024-10-08T19:51:30.774423480Z" level=info msg="TearDown network for sandbox \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" successfully"
Oct 8 19:51:30.774479 containerd[1437]: time="2024-10-08T19:51:30.774462280Z" level=info msg="StopPodSandbox for \"9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36\" returns successfully"
Oct 8 19:51:30.897227 kubelet[2529]: I1008 19:51:30.897191 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-config-path\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897238 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-xtables-lock\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897280 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-kernel\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897297 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-bpf-maps\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897322 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-net\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897338 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-run\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897792 kubelet[2529]: I1008 19:51:30.897357 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54g44\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-kube-api-access-54g44\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897371 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-lib-modules\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897385 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-cgroup\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897399 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-etc-cni-netd\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897415 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hostproc\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897431 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc76\" (UniqueName: \"kubernetes.io/projected/cb28674c-61d8-4f96-9e0f-06c2286504ef-kube-api-access-5qc76\") pod \"cb28674c-61d8-4f96-9e0f-06c2286504ef\" (UID: \"cb28674c-61d8-4f96-9e0f-06c2286504ef\") "
Oct 8 19:51:30.897933 kubelet[2529]: I1008 19:51:30.897448 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-clustermesh-secrets\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.898108 kubelet[2529]: I1008 19:51:30.897463 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb28674c-61d8-4f96-9e0f-06c2286504ef-cilium-config-path\") pod \"cb28674c-61d8-4f96-9e0f-06c2286504ef\" (UID: \"cb28674c-61d8-4f96-9e0f-06c2286504ef\") "
Oct 8 19:51:30.898108 kubelet[2529]: I1008 19:51:30.897480 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hubble-tls\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.898108 kubelet[2529]: I1008 19:51:30.897498 2529 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cni-path\") pod \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\" (UID: \"2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02\") "
Oct 8 19:51:30.902693 kubelet[2529]: I1008 19:51:30.902651 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.902753 kubelet[2529]: I1008 19:51:30.902717 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.902753 kubelet[2529]: I1008 19:51:30.902739 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.902803 kubelet[2529]: I1008 19:51:30.902754 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.902803 kubelet[2529]: I1008 19:51:30.902768 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.902983 kubelet[2529]: I1008 19:51:30.902950 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cni-path" (OuterVolumeSpecName: "cni-path") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.903022 kubelet[2529]: I1008 19:51:30.903000 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hostproc" (OuterVolumeSpecName: "hostproc") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.906486 kubelet[2529]: I1008 19:51:30.906447 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.906524 kubelet[2529]: I1008 19:51:30.906513 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.906546 kubelet[2529]: I1008 19:51:30.906534 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:51:30.912426 kubelet[2529]: I1008 19:51:30.912163 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-kube-api-access-54g44" (OuterVolumeSpecName: "kube-api-access-54g44") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "kube-api-access-54g44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:51:30.912426 kubelet[2529]: I1008 19:51:30.912161 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb28674c-61d8-4f96-9e0f-06c2286504ef-kube-api-access-5qc76" (OuterVolumeSpecName: "kube-api-access-5qc76") pod "cb28674c-61d8-4f96-9e0f-06c2286504ef" (UID: "cb28674c-61d8-4f96-9e0f-06c2286504ef"). InnerVolumeSpecName "kube-api-access-5qc76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:51:30.912869 kubelet[2529]: I1008 19:51:30.912825 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:51:30.913566 kubelet[2529]: I1008 19:51:30.913520 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb28674c-61d8-4f96-9e0f-06c2286504ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb28674c-61d8-4f96-9e0f-06c2286504ef" (UID: "cb28674c-61d8-4f96-9e0f-06c2286504ef"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:51:30.914292 kubelet[2529]: I1008 19:51:30.914256 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:51:30.914786 kubelet[2529]: I1008 19:51:30.914758 2529 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" (UID: "2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:51:30.986868 kubelet[2529]: I1008 19:51:30.986837 2529 scope.go:117] "RemoveContainer" containerID="6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491" Oct 8 19:51:30.987676 systemd[1]: Removed slice kubepods-besteffort-podcb28674c_61d8_4f96_9e0f_06c2286504ef.slice - libcontainer container kubepods-besteffort-podcb28674c_61d8_4f96_9e0f_06c2286504ef.slice. 
Oct 8 19:51:30.988906 containerd[1437]: time="2024-10-08T19:51:30.988873465Z" level=info msg="RemoveContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\"" Oct 8 19:51:30.993600 containerd[1437]: time="2024-10-08T19:51:30.993543123Z" level=info msg="RemoveContainer for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" returns successfully" Oct 8 19:51:30.993883 kubelet[2529]: I1008 19:51:30.993787 2529 scope.go:117] "RemoveContainer" containerID="6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491" Oct 8 19:51:30.994766 systemd[1]: Removed slice kubepods-burstable-pod2e78eb9a_91c1_4cff_b7a2_30f04e8f7c02.slice - libcontainer container kubepods-burstable-pod2e78eb9a_91c1_4cff_b7a2_30f04e8f7c02.slice. Oct 8 19:51:30.994865 systemd[1]: kubepods-burstable-pod2e78eb9a_91c1_4cff_b7a2_30f04e8f7c02.slice: Consumed 6.631s CPU time. Oct 8 19:51:30.996685 containerd[1437]: time="2024-10-08T19:51:30.994000808Z" level=error msg="ContainerStatus for \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\": not found" Oct 8 19:51:30.997021 kubelet[2529]: E1008 19:51:30.996991 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\": not found" containerID="6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491" Oct 8 19:51:30.997104 kubelet[2529]: I1008 19:51:30.997031 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491"} err="failed to get container status \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"6efbf5ecf52a955d11ad67e058fda27f2a4fb0df2e3de4879a06cb3f15d3e491\": not found" Oct 8 19:51:30.997140 kubelet[2529]: I1008 19:51:30.997107 2529 scope.go:117] "RemoveContainer" containerID="59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7" Oct 8 19:51:30.997727 kubelet[2529]: I1008 19:51:30.997693 2529 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997727 kubelet[2529]: I1008 19:51:30.997717 2529 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997727 kubelet[2529]: I1008 19:51:30.997727 2529 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997737 2529 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-54g44\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-kube-api-access-54g44\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997745 2529 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997752 2529 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997761 2529 
reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997769 2529 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997777 2529 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5qc76\" (UniqueName: \"kubernetes.io/projected/cb28674c-61d8-4f96-9e0f-06c2286504ef-kube-api-access-5qc76\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997785 2529 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997810 kubelet[2529]: I1008 19:51:30.997793 2529 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb28674c-61d8-4f96-9e0f-06c2286504ef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997974 kubelet[2529]: I1008 19:51:30.997801 2529 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997974 kubelet[2529]: I1008 19:51:30.997808 2529 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997974 kubelet[2529]: I1008 19:51:30.997815 2529 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997974 kubelet[2529]: I1008 19:51:30.997824 2529 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.997974 kubelet[2529]: I1008 19:51:30.997831 2529 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 19:51:30.998386 containerd[1437]: time="2024-10-08T19:51:30.998363063Z" level=info msg="RemoveContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\"" Oct 8 19:51:31.001996 containerd[1437]: time="2024-10-08T19:51:31.001951867Z" level=info msg="RemoveContainer for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" returns successfully" Oct 8 19:51:31.002249 kubelet[2529]: I1008 19:51:31.002155 2529 scope.go:117] "RemoveContainer" containerID="8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259" Oct 8 19:51:31.003564 containerd[1437]: time="2024-10-08T19:51:31.003523846Z" level=info msg="RemoveContainer for \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\"" Oct 8 19:51:31.006272 containerd[1437]: time="2024-10-08T19:51:31.006221118Z" level=info msg="RemoveContainer for \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\" returns successfully" Oct 8 19:51:31.007063 kubelet[2529]: I1008 19:51:31.007036 2529 scope.go:117] "RemoveContainer" containerID="2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d" Oct 8 19:51:31.008349 containerd[1437]: time="2024-10-08T19:51:31.008319624Z" level=info msg="RemoveContainer for \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\"" Oct 8 
19:51:31.011830 containerd[1437]: time="2024-10-08T19:51:31.011790265Z" level=info msg="RemoveContainer for \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\" returns successfully" Oct 8 19:51:31.012066 kubelet[2529]: I1008 19:51:31.012038 2529 scope.go:117] "RemoveContainer" containerID="2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339" Oct 8 19:51:31.013608 containerd[1437]: time="2024-10-08T19:51:31.013553007Z" level=info msg="RemoveContainer for \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\"" Oct 8 19:51:31.015816 containerd[1437]: time="2024-10-08T19:51:31.015775393Z" level=info msg="RemoveContainer for \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\" returns successfully" Oct 8 19:51:31.015962 kubelet[2529]: I1008 19:51:31.015930 2529 scope.go:117] "RemoveContainer" containerID="e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2" Oct 8 19:51:31.016966 containerd[1437]: time="2024-10-08T19:51:31.016939007Z" level=info msg="RemoveContainer for \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\"" Oct 8 19:51:31.019109 containerd[1437]: time="2024-10-08T19:51:31.019068513Z" level=info msg="RemoveContainer for \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\" returns successfully" Oct 8 19:51:31.019268 kubelet[2529]: I1008 19:51:31.019235 2529 scope.go:117] "RemoveContainer" containerID="59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7" Oct 8 19:51:31.019571 containerd[1437]: time="2024-10-08T19:51:31.019482038Z" level=error msg="ContainerStatus for \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\": not found" Oct 8 19:51:31.019678 kubelet[2529]: E1008 19:51:31.019649 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\": not found" containerID="59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7" Oct 8 19:51:31.019716 kubelet[2529]: I1008 19:51:31.019683 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7"} err="failed to get container status \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c59d5ff568cd5f07d94cc3b8dfceabf7792026996b058396d9f84cdfb2d7\": not found" Oct 8 19:51:31.019716 kubelet[2529]: I1008 19:51:31.019707 2529 scope.go:117] "RemoveContainer" containerID="8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259" Oct 8 19:51:31.019898 containerd[1437]: time="2024-10-08T19:51:31.019864883Z" level=error msg="ContainerStatus for \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\": not found" Oct 8 19:51:31.020030 kubelet[2529]: E1008 19:51:31.019997 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\": not found" containerID="8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259" Oct 8 19:51:31.020088 kubelet[2529]: I1008 19:51:31.020026 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259"} err="failed to get container status \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"8e8e916f13d58f55e8796d8956402fc22dbd9701f9bd23f177c9bc13cc22b259\": not found" Oct 8 19:51:31.020088 kubelet[2529]: I1008 19:51:31.020043 2529 scope.go:117] "RemoveContainer" containerID="2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d" Oct 8 19:51:31.020262 containerd[1437]: time="2024-10-08T19:51:31.020214247Z" level=error msg="ContainerStatus for \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\": not found" Oct 8 19:51:31.020346 kubelet[2529]: E1008 19:51:31.020323 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\": not found" containerID="2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d" Oct 8 19:51:31.020384 kubelet[2529]: I1008 19:51:31.020351 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d"} err="failed to get container status \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ec5a60292ede51e120263eec0f637c3a212b5c8c8cf52e64ee5d160a32a3f3d\": not found" Oct 8 19:51:31.020384 kubelet[2529]: I1008 19:51:31.020366 2529 scope.go:117] "RemoveContainer" containerID="2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339" Oct 8 19:51:31.020562 containerd[1437]: time="2024-10-08T19:51:31.020528731Z" level=error msg="ContainerStatus for \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\": not found" Oct 8 19:51:31.020666 kubelet[2529]: E1008 19:51:31.020646 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\": not found" containerID="2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339" Oct 8 19:51:31.020715 kubelet[2529]: I1008 19:51:31.020675 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339"} err="failed to get container status \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ad1c7dc56d68f7142bf75e3df2e892b4544fc3e20bebd011a65fec88b18b339\": not found" Oct 8 19:51:31.020715 kubelet[2529]: I1008 19:51:31.020691 2529 scope.go:117] "RemoveContainer" containerID="e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2" Oct 8 19:51:31.021025 kubelet[2529]: E1008 19:51:31.021003 2529 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\": not found" containerID="e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2" Oct 8 19:51:31.021065 containerd[1437]: time="2024-10-08T19:51:31.020865215Z" level=error msg="ContainerStatus for \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\": not found" Oct 8 19:51:31.021109 kubelet[2529]: I1008 19:51:31.021022 2529 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2"} err="failed to get container status \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e848dbcb3640d8bc42eeb075537a0f9793f7007b7195912ad468f88bd4461db2\": not found" Oct 8 19:51:31.625538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57dd02bde9f28bee7bdc66ebcbf1046bbc2c19f42b4439a59aa30ec66df69496-rootfs.mount: Deactivated successfully. Oct 8 19:51:31.625638 systemd[1]: var-lib-kubelet-pods-cb28674c\x2d61d8\x2d4f96\x2d9e0f\x2d06c2286504ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qc76.mount: Deactivated successfully. Oct 8 19:51:31.625693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9552b39e077b620af0780c88cb52bf6f41f1906fa4afc31e5ffe988321730a36-rootfs.mount: Deactivated successfully. Oct 8 19:51:31.625745 systemd[1]: var-lib-kubelet-pods-2e78eb9a\x2d91c1\x2d4cff\x2db7a2\x2d30f04e8f7c02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54g44.mount: Deactivated successfully. Oct 8 19:51:31.625793 systemd[1]: var-lib-kubelet-pods-2e78eb9a\x2d91c1\x2d4cff\x2db7a2\x2d30f04e8f7c02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 19:51:31.625841 systemd[1]: var-lib-kubelet-pods-2e78eb9a\x2d91c1\x2d4cff\x2db7a2\x2d30f04e8f7c02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 8 19:51:31.731013 kubelet[2529]: I1008 19:51:31.730956 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" path="/var/lib/kubelet/pods/2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02/volumes" Oct 8 19:51:31.732015 kubelet[2529]: I1008 19:51:31.731522 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb28674c-61d8-4f96-9e0f-06c2286504ef" path="/var/lib/kubelet/pods/cb28674c-61d8-4f96-9e0f-06c2286504ef/volumes" Oct 8 19:51:32.538704 sshd[4168]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:32.544558 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:54150.service: Deactivated successfully. Oct 8 19:51:32.546304 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 19:51:32.546520 systemd[1]: session-23.scope: Consumed 1.569s CPU time. Oct 8 19:51:32.547628 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Oct 8 19:51:32.552295 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:42396.service - OpenSSH per-connection server daemon (10.0.0.1:42396). Oct 8 19:51:32.553821 systemd-logind[1425]: Removed session 23. Oct 8 19:51:32.585214 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 42396 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:51:32.586577 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:51:32.591832 systemd-logind[1425]: New session 24 of user core. Oct 8 19:51:32.603125 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 8 19:51:33.363092 sshd[4333]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:33.366851 kubelet[2529]: I1008 19:51:33.366773 2529 topology_manager.go:215] "Topology Admit Handler" podUID="585f84b4-0e1f-42f4-a23d-92743e8f66c8" podNamespace="kube-system" podName="cilium-mv5kc" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366929 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="mount-bpf-fs" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366939 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="clean-cilium-state" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366946 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="mount-cgroup" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366952 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="apply-sysctl-overwrites" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366959 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb28674c-61d8-4f96-9e0f-06c2286504ef" containerName="cilium-operator" Oct 8 19:51:33.374152 kubelet[2529]: E1008 19:51:33.366966 2529 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="cilium-agent" Oct 8 19:51:33.374152 kubelet[2529]: I1008 19:51:33.366997 2529 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e78eb9a-91c1-4cff-b7a2-30f04e8f7c02" containerName="cilium-agent" Oct 8 19:51:33.374152 kubelet[2529]: I1008 19:51:33.367005 2529 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb28674c-61d8-4f96-9e0f-06c2286504ef" containerName="cilium-operator" Oct 8 19:51:33.374152 kubelet[2529]: W1008 19:51:33.369612 2529 reflector.go:547] 
object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.374152 kubelet[2529]: W1008 19:51:33.370398 2529 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.374392 kubelet[2529]: W1008 19:51:33.370846 2529 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.375092 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:42396.service: Deactivated successfully. Oct 8 19:51:33.377873 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:51:33.383088 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. 
Oct 8 19:51:33.384340 kubelet[2529]: E1008 19:51:33.384293 2529 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.384961 kubelet[2529]: E1008 19:51:33.384916 2529 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.386081 kubelet[2529]: E1008 19:51:33.386044 2529 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:51:33.393770 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:42412.service - OpenSSH per-connection server daemon (10.0.0.1:42412). Oct 8 19:51:33.397995 systemd-logind[1425]: Removed session 24. Oct 8 19:51:33.402807 systemd[1]: Created slice kubepods-burstable-pod585f84b4_0e1f_42f4_a23d_92743e8f66c8.slice - libcontainer container kubepods-burstable-pod585f84b4_0e1f_42f4_a23d_92743e8f66c8.slice. Oct 8 19:51:33.429804 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 42412 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:51:33.431230 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:51:33.434683 systemd-logind[1425]: New session 25 of user core. 
Oct 8 19:51:33.456127 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 19:51:33.505801 sshd[4346]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:33.512086 kubelet[2529]: I1008 19:51:33.512052 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-host-proc-sys-net\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc" Oct 8 19:51:33.512168 kubelet[2529]: I1008 19:51:33.512096 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-host-proc-sys-kernel\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc" Oct 8 19:51:33.512168 kubelet[2529]: I1008 19:51:33.512117 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/585f84b4-0e1f-42f4-a23d-92743e8f66c8-clustermesh-secrets\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc" Oct 8 19:51:33.512168 kubelet[2529]: I1008 19:51:33.512135 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-hostproc\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc" Oct 8 19:51:33.512168 kubelet[2529]: I1008 19:51:33.512170 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cni-path\") pod \"cilium-mv5kc\" (UID: 
\"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512187 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-etc-cni-netd\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512202 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9h4r\" (UniqueName: \"kubernetes.io/projected/585f84b4-0e1f-42f4-a23d-92743e8f66c8-kube-api-access-z9h4r\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512220 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-run\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512236 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-xtables-lock\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512251 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/585f84b4-0e1f-42f4-a23d-92743e8f66c8-hubble-tls\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512284 kubelet[2529]: I1008 19:51:33.512267 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-bpf-maps\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512408 kubelet[2529]: I1008 19:51:33.512282 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-cgroup\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512408 kubelet[2529]: I1008 19:51:33.512295 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/585f84b4-0e1f-42f4-a23d-92743e8f66c8-lib-modules\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512408 kubelet[2529]: I1008 19:51:33.512311 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-config-path\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.512408 kubelet[2529]: I1008 19:51:33.512328 2529 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-ipsec-secrets\") pod \"cilium-mv5kc\" (UID: \"585f84b4-0e1f-42f4-a23d-92743e8f66c8\") " pod="kube-system/cilium-mv5kc"
Oct 8 19:51:33.516345 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:42412.service: Deactivated successfully.
Oct 8 19:51:33.517752 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:51:33.519070 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:51:33.536439 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:42414.service - OpenSSH per-connection server daemon (10.0.0.1:42414).
Oct 8 19:51:33.537356 systemd-logind[1425]: Removed session 25.
Oct 8 19:51:33.570706 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 42414 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:51:33.571887 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:51:33.575289 systemd-logind[1425]: New session 26 of user core.
Oct 8 19:51:33.583107 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:51:33.791286 kubelet[2529]: E1008 19:51:33.791235 2529 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 8 19:51:34.615438 kubelet[2529]: E1008 19:51:34.615385 2529 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Oct 8 19:51:34.615770 kubelet[2529]: E1008 19:51:34.615495 2529 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-config-path podName:585f84b4-0e1f-42f4-a23d-92743e8f66c8 nodeName:}" failed. No retries permitted until 2024-10-08 19:51:35.115469529 +0000 UTC m=+81.458388416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/585f84b4-0e1f-42f4-a23d-92743e8f66c8-cilium-config-path") pod "cilium-mv5kc" (UID: "585f84b4-0e1f-42f4-a23d-92743e8f66c8") : failed to sync configmap cache: timed out waiting for the condition
Oct 8 19:51:34.616417 kubelet[2529]: E1008 19:51:34.616347 2529 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Oct 8 19:51:34.616417 kubelet[2529]: E1008 19:51:34.616377 2529 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-mv5kc: failed to sync secret cache: timed out waiting for the condition
Oct 8 19:51:34.616505 kubelet[2529]: E1008 19:51:34.616436 2529 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/585f84b4-0e1f-42f4-a23d-92743e8f66c8-hubble-tls podName:585f84b4-0e1f-42f4-a23d-92743e8f66c8 nodeName:}" failed. No retries permitted until 2024-10-08 19:51:35.116420299 +0000 UTC m=+81.459339226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/585f84b4-0e1f-42f4-a23d-92743e8f66c8-hubble-tls") pod "cilium-mv5kc" (UID: "585f84b4-0e1f-42f4-a23d-92743e8f66c8") : failed to sync secret cache: timed out waiting for the condition
Oct 8 19:51:35.208471 kubelet[2529]: E1008 19:51:35.208417 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:35.209063 containerd[1437]: time="2024-10-08T19:51:35.208953545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mv5kc,Uid:585f84b4-0e1f-42f4-a23d-92743e8f66c8,Namespace:kube-system,Attempt:0,}"
Oct 8 19:51:35.237654 containerd[1437]: time="2024-10-08T19:51:35.237298688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:51:35.237654 containerd[1437]: time="2024-10-08T19:51:35.237366168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:35.237654 containerd[1437]: time="2024-10-08T19:51:35.237387609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:51:35.237654 containerd[1437]: time="2024-10-08T19:51:35.237403049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:51:35.267202 systemd[1]: Started cri-containerd-9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c.scope - libcontainer container 9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c.
Oct 8 19:51:35.297050 containerd[1437]: time="2024-10-08T19:51:35.295815632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mv5kc,Uid:585f84b4-0e1f-42f4-a23d-92743e8f66c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\""
Oct 8 19:51:35.297150 kubelet[2529]: E1008 19:51:35.296944 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:35.302193 containerd[1437]: time="2024-10-08T19:51:35.302136179Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 8 19:51:35.314847 containerd[1437]: time="2024-10-08T19:51:35.314789154Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830\""
Oct 8 19:51:35.315603 containerd[1437]: time="2024-10-08T19:51:35.315538202Z" level=info msg="StartContainer for \"8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830\""
Oct 8 19:51:35.343153 systemd[1]: Started cri-containerd-8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830.scope - libcontainer container 8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830.
Oct 8 19:51:35.362563 containerd[1437]: time="2024-10-08T19:51:35.362518663Z" level=info msg="StartContainer for \"8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830\" returns successfully"
Oct 8 19:51:35.373297 systemd[1]: cri-containerd-8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830.scope: Deactivated successfully.
Oct 8 19:51:35.400278 containerd[1437]: time="2024-10-08T19:51:35.400221505Z" level=info msg="shim disconnected" id=8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830 namespace=k8s.io
Oct 8 19:51:35.400492 containerd[1437]: time="2024-10-08T19:51:35.400291786Z" level=warning msg="cleaning up after shim disconnected" id=8dd2fe0e21522e55c2479b48eb6533e90615c1c3511bb6eaca041cedda247830 namespace=k8s.io
Oct 8 19:51:35.400492 containerd[1437]: time="2024-10-08T19:51:35.400304466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:35.670661 kubelet[2529]: I1008 19:51:35.670307 2529 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T19:51:35Z","lastTransitionTime":"2024-10-08T19:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 8 19:51:35.996846 kubelet[2529]: E1008 19:51:35.996743 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:36.007463 containerd[1437]: time="2024-10-08T19:51:36.007415299Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 8 19:51:36.017289 containerd[1437]: time="2024-10-08T19:51:36.017240241Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded\""
Oct 8 19:51:36.017900 containerd[1437]: time="2024-10-08T19:51:36.017874767Z" level=info msg="StartContainer for \"4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded\""
Oct 8 19:51:36.066188 systemd[1]: Started cri-containerd-4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded.scope - libcontainer container 4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded.
Oct 8 19:51:36.086187 containerd[1437]: time="2024-10-08T19:51:36.086135313Z" level=info msg="StartContainer for \"4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded\" returns successfully"
Oct 8 19:51:36.094005 systemd[1]: cri-containerd-4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded.scope: Deactivated successfully.
Oct 8 19:51:36.122313 containerd[1437]: time="2024-10-08T19:51:36.122259247Z" level=info msg="shim disconnected" id=4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded namespace=k8s.io
Oct 8 19:51:36.123456 containerd[1437]: time="2024-10-08T19:51:36.122478209Z" level=warning msg="cleaning up after shim disconnected" id=4d16ed1eca1fdc508f8687ca90e29850944f99819d635c2e8a59b3a5df173ded namespace=k8s.io
Oct 8 19:51:36.125115 containerd[1437]: time="2024-10-08T19:51:36.125080476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:37.000743 kubelet[2529]: E1008 19:51:37.000517 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:37.004003 containerd[1437]: time="2024-10-08T19:51:37.003882887Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 8 19:51:37.024182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376737404.mount: Deactivated successfully.
Oct 8 19:51:37.025652 containerd[1437]: time="2024-10-08T19:51:37.025600545Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da\""
Oct 8 19:51:37.026169 containerd[1437]: time="2024-10-08T19:51:37.026090749Z" level=info msg="StartContainer for \"6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da\""
Oct 8 19:51:37.062203 systemd[1]: Started cri-containerd-6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da.scope - libcontainer container 6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da.
Oct 8 19:51:37.084719 containerd[1437]: time="2024-10-08T19:51:37.084670097Z" level=info msg="StartContainer for \"6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da\" returns successfully"
Oct 8 19:51:37.085472 systemd[1]: cri-containerd-6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da.scope: Deactivated successfully.
Oct 8 19:51:37.112694 containerd[1437]: time="2024-10-08T19:51:37.112612938Z" level=info msg="shim disconnected" id=6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da namespace=k8s.io
Oct 8 19:51:37.112694 containerd[1437]: time="2024-10-08T19:51:37.112690939Z" level=warning msg="cleaning up after shim disconnected" id=6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da namespace=k8s.io
Oct 8 19:51:37.112952 containerd[1437]: time="2024-10-08T19:51:37.112700219Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:37.123456 containerd[1437]: time="2024-10-08T19:51:37.123390646Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:51:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 19:51:37.126497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6965478a4f7f13556453bb38ab5e71a7d02b7a3cd2191fb698e052232574c8da-rootfs.mount: Deactivated successfully.
Oct 8 19:51:38.006027 kubelet[2529]: E1008 19:51:38.005576 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:38.012082 containerd[1437]: time="2024-10-08T19:51:38.009588936Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 8 19:51:38.029237 containerd[1437]: time="2024-10-08T19:51:38.029110366Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07\""
Oct 8 19:51:38.029777 containerd[1437]: time="2024-10-08T19:51:38.029751573Z" level=info msg="StartContainer for \"b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07\""
Oct 8 19:51:38.054224 systemd[1]: Started cri-containerd-b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07.scope - libcontainer container b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07.
Oct 8 19:51:38.081654 systemd[1]: cri-containerd-b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07.scope: Deactivated successfully.
Oct 8 19:51:38.086126 containerd[1437]: time="2024-10-08T19:51:38.086071241Z" level=info msg="StartContainer for \"b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07\" returns successfully"
Oct 8 19:51:38.111918 containerd[1437]: time="2024-10-08T19:51:38.111854812Z" level=info msg="shim disconnected" id=b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07 namespace=k8s.io
Oct 8 19:51:38.111918 containerd[1437]: time="2024-10-08T19:51:38.111911732Z" level=warning msg="cleaning up after shim disconnected" id=b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07 namespace=k8s.io
Oct 8 19:51:38.111918 containerd[1437]: time="2024-10-08T19:51:38.111920693Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:51:38.127150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b20e672efd35eb37e4a03710ad11b91efe2375d31cb3850d06c619afd82a5b07-rootfs.mount: Deactivated successfully.
Oct 8 19:51:38.792998 kubelet[2529]: E1008 19:51:38.792949 2529 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 8 19:51:39.010348 kubelet[2529]: E1008 19:51:39.010311 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:39.013763 containerd[1437]: time="2024-10-08T19:51:39.013292104Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 19:51:39.025638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338582593.mount: Deactivated successfully.
Oct 8 19:51:39.026252 containerd[1437]: time="2024-10-08T19:51:39.026193786Z" level=info msg="CreateContainer within sandbox \"9f5aa6a50486b4d8e1ca6e5d99f58b77cf8988ddd91b5eb6fc921442bdf0285c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"52fa8673cbc10ebf7dfbfa0948ff175e426e2a3c2df5feab64e7045d6b67bf12\""
Oct 8 19:51:39.028025 containerd[1437]: time="2024-10-08T19:51:39.027955842Z" level=info msg="StartContainer for \"52fa8673cbc10ebf7dfbfa0948ff175e426e2a3c2df5feab64e7045d6b67bf12\""
Oct 8 19:51:39.055156 systemd[1]: Started cri-containerd-52fa8673cbc10ebf7dfbfa0948ff175e426e2a3c2df5feab64e7045d6b67bf12.scope - libcontainer container 52fa8673cbc10ebf7dfbfa0948ff175e426e2a3c2df5feab64e7045d6b67bf12.
Oct 8 19:51:39.077540 containerd[1437]: time="2024-10-08T19:51:39.077422029Z" level=info msg="StartContainer for \"52fa8673cbc10ebf7dfbfa0948ff175e426e2a3c2df5feab64e7045d6b67bf12\" returns successfully"
Oct 8 19:51:39.347003 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 8 19:51:39.729781 kubelet[2529]: E1008 19:51:39.729745 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:39.729965 kubelet[2529]: E1008 19:51:39.729946 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:40.014874 kubelet[2529]: E1008 19:51:40.014774 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:41.210331 kubelet[2529]: E1008 19:51:41.210297 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:42.106730 systemd-networkd[1372]: lxc_health: Link UP
Oct 8 19:51:42.115177 systemd-networkd[1372]: lxc_health: Gained carrier
Oct 8 19:51:43.203115 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Oct 8 19:51:43.211495 kubelet[2529]: E1008 19:51:43.211457 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:43.229850 kubelet[2529]: I1008 19:51:43.229783 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mv5kc" podStartSLOduration=10.229766061 podStartE2EDuration="10.229766061s" podCreationTimestamp="2024-10-08 19:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:51:40.042471251 +0000 UTC m=+86.385390138" watchObservedRunningTime="2024-10-08 19:51:43.229766061 +0000 UTC m=+89.572684988"
Oct 8 19:51:43.731459 kubelet[2529]: E1008 19:51:43.731408 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:44.021991 kubelet[2529]: E1008 19:51:44.021850 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:45.023763 kubelet[2529]: E1008 19:51:45.023717 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:51:48.324427 sshd[4354]: pam_unix(sshd:session): session closed for user core
Oct 8 19:51:48.326872 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:42414.service: Deactivated successfully.
Oct 8 19:51:48.328641 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:51:48.330232 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:51:48.331345 systemd-logind[1425]: Removed session 26.