Oct 8 20:00:14.924177 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 8 20:00:14.924198 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024 Oct 8 20:00:14.924209 kernel: KASLR enabled Oct 8 20:00:14.924215 kernel: efi: EFI v2.7 by EDK II Oct 8 20:00:14.924220 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Oct 8 20:00:14.924226 kernel: random: crng init done Oct 8 20:00:14.924233 kernel: ACPI: Early table checksum verification disabled Oct 8 20:00:14.924239 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Oct 8 20:00:14.924245 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 8 20:00:14.924252 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924259 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924264 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924270 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924276 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924284 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924292 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924298 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924304 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:14.924311 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 8 20:00:14.924317 kernel: NUMA: Failed to initialise from firmware Oct 8 20:00:14.924324 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 20:00:14.924330 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff] Oct 8 20:00:14.924337 kernel: Zone ranges: Oct 8 20:00:14.924343 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 20:00:14.924349 kernel: DMA32 empty Oct 8 20:00:14.924357 kernel: Normal empty Oct 8 20:00:14.924363 kernel: Movable zone start for each node Oct 8 20:00:14.924369 kernel: Early memory node ranges Oct 8 20:00:14.924375 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Oct 8 20:00:14.924382 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 8 20:00:14.924388 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 8 20:00:14.924394 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 8 20:00:14.924401 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 8 20:00:14.924407 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 8 20:00:14.924413 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 8 20:00:14.924420 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 20:00:14.924426 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 8 20:00:14.924434 kernel: psci: probing for conduit method from ACPI. Oct 8 20:00:14.924440 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 8 20:00:14.924447 kernel: psci: Using standard PSCI v0.2 function IDs Oct 8 20:00:14.924456 kernel: psci: Trusted OS migration not required Oct 8 20:00:14.924470 kernel: psci: SMC Calling Convention v1.1 Oct 8 20:00:14.924477 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 8 20:00:14.924487 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 8 20:00:14.924494 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 8 20:00:14.924501 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 8 20:00:14.924507 kernel: Detected PIPT I-cache on CPU0 Oct 8 20:00:14.924537 kernel: CPU features: detected: GIC system register CPU interface Oct 8 20:00:14.924572 kernel: CPU features: detected: Hardware dirty bit management Oct 8 20:00:14.924579 kernel: CPU features: detected: Spectre-v4 Oct 8 20:00:14.924586 kernel: CPU features: detected: Spectre-BHB Oct 8 20:00:14.924592 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 8 20:00:14.924599 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 8 20:00:14.924622 kernel: CPU features: detected: ARM erratum 1418040 Oct 8 20:00:14.924629 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 8 20:00:14.924636 kernel: alternatives: applying boot alternatives Oct 8 20:00:14.924644 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3 Oct 8 20:00:14.924651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:00:14.924658 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 20:00:14.924665 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 20:00:14.924672 kernel: Fallback order for Node 0: 0 Oct 8 20:00:14.924686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 8 20:00:14.924693 kernel: Policy zone: DMA Oct 8 20:00:14.924700 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:00:14.924709 kernel: software IO TLB: area num 4. Oct 8 20:00:14.924716 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 8 20:00:14.924724 kernel: Memory: 2386460K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 185828K reserved, 0K cma-reserved) Oct 8 20:00:14.924730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 20:00:14.924737 kernel: trace event string verifier disabled Oct 8 20:00:14.924744 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:00:14.924751 kernel: rcu: RCU event tracing is enabled. Oct 8 20:00:14.924758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 20:00:14.924765 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:00:14.924772 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:00:14.924779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 8 20:00:14.924786 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 20:00:14.924794 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 8 20:00:14.924800 kernel: GICv3: 256 SPIs implemented Oct 8 20:00:14.924807 kernel: GICv3: 0 Extended SPIs implemented Oct 8 20:00:14.924814 kernel: Root IRQ handler: gic_handle_irq Oct 8 20:00:14.924821 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 8 20:00:14.924828 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 8 20:00:14.924834 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 8 20:00:14.924841 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Oct 8 20:00:14.924848 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Oct 8 20:00:14.924855 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 8 20:00:14.924862 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 8 20:00:14.924870 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:00:14.924877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 20:00:14.924883 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 8 20:00:14.924890 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 8 20:00:14.924897 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 8 20:00:14.924904 kernel: arm-pv: using stolen time PV Oct 8 20:00:14.924911 kernel: Console: colour dummy device 80x25 Oct 8 20:00:14.924918 kernel: ACPI: Core revision 20230628 Oct 8 20:00:14.924925 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 8 20:00:14.924932 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:00:14.924940 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:00:14.924947 kernel: landlock: Up and running. Oct 8 20:00:14.924954 kernel: SELinux: Initializing. Oct 8 20:00:14.924961 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 20:00:14.924968 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 20:00:14.924975 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:14.924981 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:14.924988 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:00:14.924995 kernel: rcu: Max phase no-delay instances is 400. Oct 8 20:00:14.925003 kernel: Platform MSI: ITS@0x8080000 domain created Oct 8 20:00:14.925010 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 8 20:00:14.925017 kernel: Remapping and enabling EFI services. Oct 8 20:00:14.925024 kernel: smp: Bringing up secondary CPUs ... 
Oct 8 20:00:14.925031 kernel: Detected PIPT I-cache on CPU1 Oct 8 20:00:14.925038 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 8 20:00:14.925045 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 8 20:00:14.925052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 20:00:14.925058 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 8 20:00:14.925065 kernel: Detected PIPT I-cache on CPU2 Oct 8 20:00:14.925074 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 8 20:00:14.925081 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 8 20:00:14.925093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 20:00:14.925101 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 8 20:00:14.925108 kernel: Detected PIPT I-cache on CPU3 Oct 8 20:00:14.925115 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 8 20:00:14.925123 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 8 20:00:14.925130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 20:00:14.925137 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 8 20:00:14.925146 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 20:00:14.925153 kernel: SMP: Total of 4 processors activated. Oct 8 20:00:14.925160 kernel: CPU features: detected: 32-bit EL0 Support Oct 8 20:00:14.925167 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 8 20:00:14.925175 kernel: CPU features: detected: Common not Private translations Oct 8 20:00:14.925182 kernel: CPU features: detected: CRC32 instructions Oct 8 20:00:14.925189 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 8 20:00:14.925196 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 8 20:00:14.925205 kernel: CPU features: detected: LSE atomic instructions Oct 8 20:00:14.925212 kernel: CPU features: detected: Privileged Access Never Oct 8 20:00:14.925220 kernel: CPU features: detected: RAS Extension Support Oct 8 20:00:14.925227 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 8 20:00:14.925234 kernel: CPU: All CPU(s) started at EL1 Oct 8 20:00:14.925241 kernel: alternatives: applying system-wide alternatives Oct 8 20:00:14.925249 kernel: devtmpfs: initialized Oct 8 20:00:14.925256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:00:14.925263 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 20:00:14.925272 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:00:14.925280 kernel: SMBIOS 3.0.0 present. 
Oct 8 20:00:14.925287 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Oct 8 20:00:14.925294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:00:14.925302 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 8 20:00:14.925309 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 8 20:00:14.925316 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 8 20:00:14.925324 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:00:14.925331 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Oct 8 20:00:14.925339 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:00:14.925347 kernel: cpuidle: using governor menu Oct 8 20:00:14.925354 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 8 20:00:14.925361 kernel: ASID allocator initialised with 32768 entries Oct 8 20:00:14.925368 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:00:14.925376 kernel: Serial: AMBA PL011 UART driver Oct 8 20:00:14.925383 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 8 20:00:14.925390 kernel: Modules: 0 pages in range for non-PLT usage Oct 8 20:00:14.925397 kernel: Modules: 509024 pages in range for PLT usage Oct 8 20:00:14.925406 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:00:14.925413 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:00:14.925421 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 8 20:00:14.925428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 8 20:00:14.925441 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:00:14.925450 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:00:14.925464 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 8 20:00:14.925471 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 8 20:00:14.925479 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:00:14.925488 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:00:14.925495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:00:14.925503 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:00:14.925510 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 20:00:14.925517 kernel: ACPI: Interpreter enabled Oct 8 20:00:14.925525 kernel: ACPI: Using GIC for interrupt routing Oct 8 20:00:14.925532 kernel: ACPI: MCFG table detected, 1 entries Oct 8 20:00:14.925539 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 8 20:00:14.925547 kernel: printk: console [ttyAMA0] enabled Oct 8 20:00:14.925555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:00:14.925714 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:00:14.925795 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 8 20:00:14.925861 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 8 20:00:14.925927 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 8 20:00:14.925997 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 8 20:00:14.926007 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 8 20:00:14.926018 kernel: PCI host bridge to bus 
0000:00 Oct 8 20:00:14.926089 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 8 20:00:14.926148 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 8 20:00:14.926206 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 8 20:00:14.926262 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:00:14.926345 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 8 20:00:14.926431 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 20:00:14.926544 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 8 20:00:14.926618 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 8 20:00:14.926706 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 20:00:14.926774 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 20:00:14.926837 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 8 20:00:14.926900 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 8 20:00:14.926959 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 8 20:00:14.927020 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 8 20:00:14.927076 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 8 20:00:14.927086 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 8 20:00:14.927093 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 8 20:00:14.927101 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 8 20:00:14.927109 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 8 20:00:14.927116 kernel: iommu: Default domain type: Translated Oct 8 20:00:14.927123 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 8 20:00:14.927132 kernel: efivars: Registered efivars operations Oct 8 20:00:14.927139 kernel: vgaarb: loaded Oct 8 20:00:14.927147 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 8 20:00:14.927154 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:00:14.927162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:00:14.927169 kernel: pnp: PnP ACPI init Oct 8 20:00:14.927249 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 8 20:00:14.927260 kernel: pnp: PnP ACPI: found 1 devices Oct 8 20:00:14.927269 kernel: NET: Registered PF_INET protocol family Oct 8 20:00:14.927276 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 20:00:14.927284 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 20:00:14.927291 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:00:14.927299 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 20:00:14.927306 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 20:00:14.927313 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 20:00:14.927321 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 20:00:14.927328 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 20:00:14.927337 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:00:14.927344 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:00:14.927351 kernel: kvm [1]: HYP mode not available Oct 8 20:00:14.927359 kernel: Initialise system trusted keyrings Oct 8 
20:00:14.927366 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 20:00:14.927373 kernel: Key type asymmetric registered Oct 8 20:00:14.927381 kernel: Asymmetric key parser 'x509' registered Oct 8 20:00:14.927388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 20:00:14.927395 kernel: io scheduler mq-deadline registered Oct 8 20:00:14.927404 kernel: io scheduler kyber registered Oct 8 20:00:14.927411 kernel: io scheduler bfq registered Oct 8 20:00:14.927419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 20:00:14.927426 kernel: ACPI: button: Power Button [PWRB] Oct 8 20:00:14.927434 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 20:00:14.927506 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 8 20:00:14.927516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:00:14.927523 kernel: thunder_xcv, ver 1.0 Oct 8 20:00:14.927531 kernel: thunder_bgx, ver 1.0 Oct 8 20:00:14.927540 kernel: nicpf, ver 1.0 Oct 8 20:00:14.927547 kernel: nicvf, ver 1.0 Oct 8 20:00:14.927627 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 20:00:14.927721 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T20:00:14 UTC (1728417614) Oct 8 20:00:14.927737 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 20:00:14.927748 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 20:00:14.927756 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 20:00:14.927763 kernel: watchdog: Hard watchdog permanently disabled Oct 8 20:00:14.927774 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:00:14.927781 kernel: Segment Routing with IPv6 Oct 8 20:00:14.927788 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:00:14.927796 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:00:14.927803 kernel: Key type dns_resolver registered Oct 8 20:00:14.927810 kernel: registered taskstats version 1 Oct 8 20:00:14.927817 kernel: Loading compiled-in X.509 certificates Oct 8 20:00:14.927825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05' Oct 8 20:00:14.927832 kernel: Key type .fscrypt registered Oct 8 20:00:14.927841 kernel: Key type fscrypt-provisioning registered Oct 8 20:00:14.927848 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 20:00:14.927855 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:00:14.927863 kernel: ima: No architecture policies found Oct 8 20:00:14.927870 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 20:00:14.927877 kernel: clk: Disabling unused clocks Oct 8 20:00:14.927884 kernel: Freeing unused kernel memory: 39360K Oct 8 20:00:14.927892 kernel: Run /init as init process Oct 8 20:00:14.927899 kernel: with arguments: Oct 8 20:00:14.927908 kernel: /init Oct 8 20:00:14.927914 kernel: with environment: Oct 8 20:00:14.927922 kernel: HOME=/ Oct 8 20:00:14.927929 kernel: TERM=linux Oct 8 20:00:14.927936 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:00:14.927945 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:00:14.927954 systemd[1]: Detected virtualization kvm. 
Oct 8 20:00:14.927962 systemd[1]: Detected architecture arm64. Oct 8 20:00:14.927971 systemd[1]: Running in initrd. Oct 8 20:00:14.927979 systemd[1]: No hostname configured, using default hostname. Oct 8 20:00:14.927986 systemd[1]: Hostname set to . Oct 8 20:00:14.927995 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:00:14.928002 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:00:14.928010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:14.928018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:14.928026 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:00:14.928035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:00:14.928043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:00:14.928051 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:00:14.928060 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:00:14.928068 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:00:14.928076 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:14.928084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:14.928093 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:00:14.928101 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:00:14.928109 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:00:14.928116 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:00:14.928124 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:14.928132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:14.928140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:00:14.928148 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:00:14.928157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:14.928165 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:14.928173 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:14.928180 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:00:14.928189 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:00:14.928204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:00:14.928219 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:00:14.928227 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:00:14.928235 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:00:14.928244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:00:14.928252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:14.928260 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:14.928268 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 8 20:00:14.928276 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:00:14.928301 systemd-journald[238]: Collecting audit messages is disabled. Oct 8 20:00:14.928322 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:00:14.928330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:14.928340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:00:14.928348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:00:14.928356 systemd-journald[238]: Journal started Oct 8 20:00:14.928374 systemd-journald[238]: Runtime Journal (/run/log/journal/f7339d87e89e460c9c470422429959e4) is 5.9M, max 47.3M, 41.4M free. Oct 8 20:00:14.915802 systemd-modules-load[239]: Inserted module 'overlay' Oct 8 20:00:14.933267 systemd-modules-load[239]: Inserted module 'br_netfilter' Oct 8 20:00:14.934896 kernel: Bridge firewalling registered Oct 8 20:00:14.934916 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:00:14.936138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:14.943847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:14.945578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:00:14.950204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:00:14.953387 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:00:14.957404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:14.961347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:14.965956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:14.967433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:14.979918 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:00:14.982272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:00:14.989250 dracut-cmdline[277]: dracut-dracut-053 Oct 8 20:00:14.991772 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3 Oct 8 20:00:15.019214 systemd-resolved[279]: Positive Trust Anchors: Oct 8 20:00:15.019232 systemd-resolved[279]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:00:15.019264 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:00:15.024119 systemd-resolved[279]: Defaulting to hostname 'linux'. Oct 8 20:00:15.027612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:00:15.028789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:15.054708 kernel: SCSI subsystem initialized Oct 8 20:00:15.059693 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:00:15.066702 kernel: iscsi: registered transport (tcp) Oct 8 20:00:15.079817 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:00:15.079871 kernel: QLogic iSCSI HBA Driver Oct 8 20:00:15.123406 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:15.134815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:00:15.153315 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:00:15.153360 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:00:15.154978 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:00:15.201712 kernel: raid6: neonx8 gen() 15770 MB/s Oct 8 20:00:15.218704 kernel: raid6: neonx4 gen() 15676 MB/s Oct 8 20:00:15.235697 kernel: raid6: neonx2 gen() 13233 MB/s Oct 8 20:00:15.252703 kernel: raid6: neonx1 gen() 10470 MB/s Oct 8 20:00:15.269696 kernel: raid6: int64x8 gen() 6943 MB/s Oct 8 20:00:15.286696 kernel: raid6: int64x4 gen() 7349 MB/s Oct 8 20:00:15.303698 kernel: raid6: int64x2 gen() 6127 MB/s Oct 8 20:00:15.320815 kernel: raid6: int64x1 gen() 5055 MB/s Oct 8 20:00:15.320836 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s Oct 8 20:00:15.338839 kernel: raid6: .... xor() 12010 MB/s, rmw enabled Oct 8 20:00:15.338864 kernel: raid6: using neon recovery algorithm Oct 8 20:00:15.343697 kernel: xor: measuring software checksum speed Oct 8 20:00:15.345053 kernel: 8regs : 19778 MB/sec Oct 8 20:00:15.345070 kernel: 32regs : 18982 MB/sec Oct 8 20:00:15.346296 kernel: arm64_neon : 26857 MB/sec Oct 8 20:00:15.346314 kernel: xor: using function: arm64_neon (26857 MB/sec) Oct 8 20:00:15.399917 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:00:15.412354 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:15.431495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:15.445113 systemd-udevd[461]: Using default interface naming scheme 'v255'. Oct 8 20:00:15.448366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:15.451666 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:00:15.469844 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Oct 8 20:00:15.502322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 8 20:00:15.515128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:00:15.553470 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:15.560024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:00:15.575742 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:15.577311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:00:15.579596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:15.582058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:00:15.593834 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:00:15.607732 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 8 20:00:15.607908 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 20:00:15.609999 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:15.614788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:15.614912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:15.618264 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:15.619422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:15.619578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:15.628320 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:00:15.628343 kernel: GPT:9289727 != 19775487 Oct 8 20:00:15.628353 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:00:15.628362 kernel: GPT:9289727 != 19775487 Oct 8 20:00:15.628371 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:00:15.628388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:15.621802 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:15.634959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:15.644720 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (523) Oct 8 20:00:15.646718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518) Oct 8 20:00:15.648617 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 20:00:15.652915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:15.660749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 20:00:15.664671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 20:00:15.665999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 20:00:15.671816 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:00:15.683825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:00:15.685706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:15.692839 disk-uuid[555]: Primary Header is updated. 
Oct 8 20:00:15.692839 disk-uuid[555]: Secondary Entries is updated. Oct 8 20:00:15.692839 disk-uuid[555]: Secondary Header is updated. Oct 8 20:00:15.697119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:15.712928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:16.715449 disk-uuid[556]: The operation has completed successfully. Oct 8 20:00:16.716553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:16.742985 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:00:16.743081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:00:16.756870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:00:16.759718 sh[578]: Success Oct 8 20:00:16.774970 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 20:00:16.812223 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:00:16.832979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:00:16.835386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:00:16.848239 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916 Oct 8 20:00:16.848279 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:00:16.848291 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:00:16.850112 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:00:16.850138 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:00:16.862772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:00:16.864035 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:00:16.871832 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:00:16.873906 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:00:16.886176 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:00:16.886217 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:00:16.886233 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:16.888846 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:16.895241 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:00:16.897737 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:00:16.906539 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:00:16.911819 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:00:16.974076 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:16.987832 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 8 20:00:17.010527 ignition[686]: Ignition 2.19.0 Oct 8 20:00:17.010536 ignition[686]: Stage: fetch-offline Oct 8 20:00:17.010576 ignition[686]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.010585 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.010748 ignition[686]: parsed url from cmdline: "" Oct 8 20:00:17.010751 ignition[686]: no config URL provided Oct 8 20:00:17.010756 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:00:17.016800 systemd-networkd[770]: lo: Link UP Oct 8 20:00:17.010763 ignition[686]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:00:17.016803 systemd-networkd[770]: lo: Gained carrier Oct 8 20:00:17.010788 ignition[686]: op(1): [started] loading QEMU firmware config module Oct 8 20:00:17.017502 systemd-networkd[770]: Enumeration completed Oct 8 20:00:17.010792 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 20:00:17.017911 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:17.017914 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:00:17.018187 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:00:17.032958 ignition[686]: op(1): [finished] loading QEMU firmware config module Oct 8 20:00:17.020968 systemd-networkd[770]: eth0: Link UP Oct 8 20:00:17.020972 systemd-networkd[770]: eth0: Gained carrier Oct 8 20:00:17.020978 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:17.024324 systemd[1]: Reached target network.target - Network. Oct 8 20:00:17.044725 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:00:17.075607 ignition[686]: parsing config with SHA512: 754a2eb7f5abbdc6c59e06e4800e0181cf80f3e81cf32e63410a8906d4795e30b422ae38541c74ee05b7213a37c88c19afdcb58c64c8ad51953fcdf5c2d8a25e Oct 8 20:00:17.080812 unknown[686]: fetched base config from "system" Oct 8 20:00:17.080826 unknown[686]: fetched user config from "qemu" Oct 8 20:00:17.081525 ignition[686]: fetch-offline: fetch-offline passed Oct 8 20:00:17.083522 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:17.081627 ignition[686]: Ignition finished successfully Oct 8 20:00:17.084823 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 20:00:17.091862 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:00:17.102161 ignition[776]: Ignition 2.19.0 Oct 8 20:00:17.102171 ignition[776]: Stage: kargs Oct 8 20:00:17.102329 ignition[776]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.102338 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.103249 ignition[776]: kargs: kargs passed Oct 8 20:00:17.103294 ignition[776]: Ignition finished successfully Oct 8 20:00:17.106073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:00:17.120878 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 8 20:00:17.129843 ignition[783]: Ignition 2.19.0 Oct 8 20:00:17.129852 ignition[783]: Stage: disks Oct 8 20:00:17.130012 ignition[783]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.130021 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.130932 ignition[783]: disks: disks passed Oct 8 20:00:17.132709 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:00:17.130975 ignition[783]: Ignition finished successfully Oct 8 20:00:17.134068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:17.135419 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:00:17.137316 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:00:17.138826 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:00:17.140625 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:00:17.153854 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:00:17.165861 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 20:00:17.169720 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:00:17.172304 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:00:17.219146 kernel: EXT4-fs (vda9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none. Oct 8 20:00:17.219870 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:00:17.220704 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:00:17.230941 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:17.233930 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:00:17.234903 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:00:17.234940 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:00:17.234961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:17.250389 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Oct 8 20:00:17.250412 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:00:17.250423 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:00:17.250433 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:17.239564 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:00:17.242787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:00:17.254863 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:17.258635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:00:17.308486 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:00:17.313321 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:00:17.318860 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:00:17.323142 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:00:17.401226 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 8 20:00:17.411788 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:00:17.415487 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:00:17.420739 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:00:17.438280 ignition[916]: INFO : Ignition 2.19.0 Oct 8 20:00:17.438280 ignition[916]: INFO : Stage: mount Oct 8 20:00:17.439818 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.439818 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.439818 ignition[916]: INFO : mount: mount passed Oct 8 20:00:17.439818 ignition[916]: INFO : Ignition finished successfully Oct 8 20:00:17.442234 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:00:17.444736 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:00:17.452800 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:00:17.844744 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:00:17.854834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:17.859696 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931) Oct 8 20:00:17.861787 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:00:17.861805 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:00:17.861815 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:17.864707 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:17.865633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:00:17.881060 ignition[948]: INFO : Ignition 2.19.0 Oct 8 20:00:17.881060 ignition[948]: INFO : Stage: files Oct 8 20:00:17.882705 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.882705 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.882705 ignition[948]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:00:17.886035 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:00:17.886035 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:00:17.886035 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:00:17.886035 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:00:17.886035 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:00:17.885160 unknown[948]: wrote ssh authorized keys file for user: core Oct 8 20:00:17.893285 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 20:00:17.893285 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 20:00:17.951772 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:00:18.332624 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 20:00:18.332624 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:00:18.336361 
ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 8 20:00:18.376815 systemd-networkd[770]: eth0: Gained IPv6LL Oct 8 20:00:18.678624 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 20:00:18.740213 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Oct 8 20:00:18.742134 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Oct 8 20:00:19.007853 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 8 20:00:19.216260 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Oct 8 20:00:19.216260 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 8 20:00:19.219903 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 8 20:00:19.239978 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 20:00:19.243551 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 20:00:19.245835 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 8 20:00:19.245835 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:19.245835 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:19.245835 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:19.245835 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:19.245835 ignition[948]: INFO : files: files passed Oct 8 20:00:19.245835 ignition[948]: INFO : Ignition finished successfully Oct 8 20:00:19.247959 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:00:19.260866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:00:19.263325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:00:19.264939 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:00:19.265016 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:00:19.271193 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 20:00:19.273358 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:19.273358 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:19.276296 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:19.276260 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:19.277702 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:00:19.287880 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:00:19.304894 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:00:19.304990 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Oct 8 20:00:19.307073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:00:19.308858 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:00:19.310613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:00:19.311287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:00:19.326323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:19.328615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:00:19.339298 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:19.340507 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:19.342507 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:00:19.344219 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:00:19.344329 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:19.346720 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:00:19.348741 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:00:19.350364 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:00:19.352054 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:19.353962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:19.355851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:00:19.357622 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:00:19.359533 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:00:19.361443 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:00:19.363132 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:00:19.364583 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:00:19.364708 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:19.366961 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:19.368835 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:19.370802 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:00:19.371730 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:19.373720 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:00:19.373827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:19.376535 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:00:19.376652 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:19.378631 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:00:19.380272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:00:19.380362 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:19.382332 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:00:19.383849 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:00:19.385550 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 8 20:00:19.385635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:19.387716 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:00:19.387802 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:19.389323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:00:19.389433 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:19.391120 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:00:19.391226 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:00:19.401926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:00:19.403653 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:00:19.403802 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:19.406820 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:00:19.407670 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:00:19.407823 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:19.409716 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:00:19.409820 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:00:19.415275 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:00:19.416333 ignition[1002]: INFO : Ignition 2.19.0 Oct 8 20:00:19.416333 ignition[1002]: INFO : Stage: umount Oct 8 20:00:19.416333 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:19.416333 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:19.422996 ignition[1002]: INFO : umount: umount passed Oct 8 20:00:19.422996 ignition[1002]: INFO : Ignition finished successfully Oct 8 20:00:19.417192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:00:19.420213 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:00:19.420636 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:00:19.420760 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:00:19.422630 systemd[1]: Stopped target network.target - Network. Oct 8 20:00:19.423838 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:00:19.423909 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:00:19.425438 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:00:19.425492 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:00:19.427043 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:00:19.427086 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:00:19.429106 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:00:19.429152 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:19.430993 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:00:19.432631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:00:19.440764 systemd-networkd[770]: eth0: DHCPv6 lease lost Oct 8 20:00:19.440836 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:00:19.440968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:00:19.443327 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Oct 8 20:00:19.443435 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:00:19.446183 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:00:19.446236 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:19.456824 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:00:19.457669 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:00:19.457754 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:19.459736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:00:19.459781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:19.461628 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:00:19.461674 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:19.463755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:00:19.463801 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:19.466031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:19.475203 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:00:19.476713 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:00:19.487374 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:00:19.487523 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:19.489766 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:00:19.489805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:19.490926 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:00:19.490959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:19.492933 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:00:19.492981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:19.495506 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:00:19.495549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:19.497399 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:19.497451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:19.516834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:00:19.517833 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:00:19.517888 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:19.519955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:19.519998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:19.522076 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:00:19.522154 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:00:19.523850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:00:19.523915 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:00:19.526255 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Oct 8 20:00:19.527552 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:00:19.527612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:00:19.530001 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:00:19.538557 systemd[1]: Switching root. Oct 8 20:00:19.568671 systemd-journald[238]: Journal stopped Oct 8 20:00:20.250792 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Oct 8 20:00:20.250854 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:00:20.250866 kernel: SELinux: policy capability open_perms=1 Oct 8 20:00:20.250876 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:00:20.250885 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:00:20.250897 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:00:20.250907 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:00:20.250920 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:00:20.250930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:00:20.250939 kernel: audit: type=1403 audit(1728417619.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:00:20.250950 systemd[1]: Successfully loaded SELinux policy in 30.903ms. Oct 8 20:00:20.250970 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.015ms. Oct 8 20:00:20.250982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:00:20.250993 systemd[1]: Detected virtualization kvm. Oct 8 20:00:20.251003 systemd[1]: Detected architecture arm64. Oct 8 20:00:20.251013 systemd[1]: Detected first boot. Oct 8 20:00:20.251026 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:00:20.251036 zram_generator::config[1049]: No configuration found. Oct 8 20:00:20.251048 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:00:20.251058 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:00:20.251068 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:00:20.251079 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:00:20.251090 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:00:20.251100 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:00:20.251112 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:00:20.251123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:00:20.251134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:00:20.251145 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:00:20.251155 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:00:20.251165 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:00:20.251176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:20.251187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
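The journal timestamps above are precise enough to measure the handoff from the initrd to the real root: "Switching root." is logged at 20:00:19.538557 and "Detected first boot." at 20:00:20.251013, about 0.71 s later, with the SELinux policy load itself reported at 30.903ms inside that window. A small sketch of that kind of delta calculation over lines in this format (the parsing approach and hard-coded year are assumptions, not part of any Flatcar tooling):

    # Sketch: compute the elapsed time between two journal lines of the form
    # "Oct 8 20:00:19.538557 systemd[1]: Switching root."
    from datetime import datetime

    def parse_ts(line, year=2024):
        # The syslog-style prefix carries no year, so one must be supplied.
        stamp = " ".join(line.split()[:3])        # e.g. "Oct 8 20:00:19.538557"
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    start = parse_ts("Oct 8 20:00:19.538557 systemd[1]: Switching root.")
    end = parse_ts("Oct 8 20:00:20.251013 systemd[1]: Detected first boot.")
    print(f"{(end - start).total_seconds():.3f} s")   # ~0.712 s between the two entries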
Oct 8 20:00:20.251198 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:00:20.251210 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:00:20.251233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:00:20.251244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:00:20.251256 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 8 20:00:20.251266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:20.251277 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:00:20.251287 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:00:20.251298 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:00:20.251311 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:00:20.251322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:20.251333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:00:20.251343 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:00:20.251354 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:00:20.251364 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:00:20.251375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:00:20.251386 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:20.251398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:20.251408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:20.251419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:00:20.251430 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:00:20.251451 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:00:20.251462 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:00:20.251473 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:00:20.251483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:00:20.251493 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:00:20.251506 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:00:20.251517 systemd[1]: Reached target machines.target - Containers. Oct 8 20:00:20.251527 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:00:20.251542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:20.251553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:00:20.251566 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:00:20.251585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:20.251597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 8 20:00:20.251608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:20.251621 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:00:20.251632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:20.251642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:00:20.251653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:00:20.251667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:00:20.252380 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:00:20.252403 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:00:20.252414 kernel: fuse: init (API version 7.39) Oct 8 20:00:20.252430 kernel: loop: module loaded Oct 8 20:00:20.252440 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:00:20.252462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:00:20.252473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:00:20.252484 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:00:20.252518 systemd-journald[1116]: Collecting audit messages is disabled. Oct 8 20:00:20.252543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:00:20.252554 kernel: ACPI: bus type drm_connector registered Oct 8 20:00:20.252566 systemd-journald[1116]: Journal started Oct 8 20:00:20.252587 systemd-journald[1116]: Runtime Journal (/run/log/journal/f7339d87e89e460c9c470422429959e4) is 5.9M, max 47.3M, 41.4M free. Oct 8 20:00:20.058174 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:00:20.077576 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 20:00:20.077913 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:00:20.254159 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:00:20.254187 systemd[1]: Stopped verity-setup.service. Oct 8 20:00:20.258139 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:00:20.258747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:00:20.259933 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:00:20.261097 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:00:20.262179 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:00:20.263341 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:00:20.264561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:00:20.266760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:00:20.268200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:20.269645 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:00:20.269826 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:00:20.271173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:20.271315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:20.272663 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 8 20:00:20.272809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:00:20.275018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:20.275158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:20.276559 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:00:20.276700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:00:20.277969 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:20.278097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:20.279414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:20.280800 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:00:20.282419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:00:20.293608 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 20:00:20.303802 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:00:20.305771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:00:20.306828 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:00:20.306866 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:00:20.308735 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:00:20.310810 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:00:20.312803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:00:20.313878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:20.315857 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:00:20.317880 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:00:20.319121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:00:20.322835 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:00:20.323949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:00:20.326860 systemd-journald[1116]: Time spent on flushing to /var/log/journal/f7339d87e89e460c9c470422429959e4 is 30.018ms for 856 entries. Oct 8 20:00:20.326860 systemd-journald[1116]: System Journal (/var/log/journal/f7339d87e89e460c9c470422429959e4) is 8.0M, max 195.6M, 187.6M free. Oct 8 20:00:20.372504 systemd-journald[1116]: Received client request to flush runtime journal. Oct 8 20:00:20.372552 kernel: loop0: detected capacity change from 0 to 189592 Oct 8 20:00:20.372574 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:00:20.326910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:00:20.331806 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:00:20.338289 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Oct 8 20:00:20.341268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:20.343960 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:00:20.345238 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:00:20.346814 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:00:20.351115 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:00:20.356170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:20.358607 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:00:20.372864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:00:20.377886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:00:20.382401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:00:20.385171 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:00:20.392475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:00:20.393144 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:00:20.399741 kernel: loop1: detected capacity change from 0 to 114328 Oct 8 20:00:20.400949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:00:20.402586 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 20:00:20.421279 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Oct 8 20:00:20.421293 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Oct 8 20:00:20.426251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:20.437714 kernel: loop2: detected capacity change from 0 to 114432 Oct 8 20:00:20.477707 kernel: loop3: detected capacity change from 0 to 189592 Oct 8 20:00:20.482726 kernel: loop4: detected capacity change from 0 to 114328 Oct 8 20:00:20.486771 kernel: loop5: detected capacity change from 0 to 114432 Oct 8 20:00:20.489981 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 20:00:20.490610 (sd-merge)[1187]: Merged extensions into '/usr'. Oct 8 20:00:20.493810 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:00:20.493824 systemd[1]: Reloading... Oct 8 20:00:20.541712 zram_generator::config[1212]: No configuration found. Oct 8 20:00:20.589362 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:00:20.633706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:20.668948 systemd[1]: Reloading finished in 174 ms. Oct 8 20:00:20.704715 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:00:20.706286 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:00:20.723085 systemd[1]: Starting ensure-sysext.service... 
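The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, which is why the loop0 through loop5 capacity changes appear just beforehand and why systemd immediately reloads its units afterwards. A rough sketch of enumerating such raw sysext images is shown below; the search directories are the usual systemd-sysext locations but are listed here from memory, and the actual merge is of course done by systemd-sysext itself, not by this script.

    # Sketch: list sysext images the way systemd-sysext would discover them.
    # Paths and naming are assumptions based on kubernetes-v1.31.0-arm64.raw above.
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        found = []
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                if name.endswith(".raw"):
                    found.append(os.path.join(d, name))
        return found

    for path in list_extensions():
        print(path)   # e.g. /etc/extensions/kubernetes.raw, merged into /usr and /opt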
Oct 8 20:00:20.725039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:00:20.732862 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:00:20.732875 systemd[1]: Reloading... Oct 8 20:00:20.741466 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:00:20.741857 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:00:20.742500 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:00:20.742738 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Oct 8 20:00:20.742792 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Oct 8 20:00:20.744977 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:00:20.744991 systemd-tmpfiles[1250]: Skipping /boot Oct 8 20:00:20.752038 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:00:20.752056 systemd-tmpfiles[1250]: Skipping /boot Oct 8 20:00:20.779698 zram_generator::config[1278]: No configuration found. Oct 8 20:00:20.859553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:20.894705 systemd[1]: Reloading finished in 161 ms. Oct 8 20:00:20.912710 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:00:20.920111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:20.927485 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:20.929867 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 20:00:20.932201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 20:00:20.937935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:00:20.941232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:20.946757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:00:20.950641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:20.951976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:20.956994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:20.962417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:20.963523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:20.967935 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:00:20.969892 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:00:20.970863 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Oct 8 20:00:20.972204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:20.972627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 8 20:00:20.974396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:20.974543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:20.976564 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:20.976691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:20.987449 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 20:00:20.990651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:20.992418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:00:20.996646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:21.011756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:21.016944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:00:21.020224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:21.023371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:21.024552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:21.029088 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:00:21.032832 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 20:00:21.034743 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:00:21.035547 augenrules[1371]: No rules Oct 8 20:00:21.035653 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:00:21.037334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:21.037508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:21.041180 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:00:21.041790 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:00:21.043538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:21.043671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:21.046520 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:21.046690 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1356) Oct 8 20:00:21.047721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:21.049697 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1356) Oct 8 20:00:21.052732 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:21.055237 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:00:21.059537 systemd[1]: Finished ensure-sysext.service. Oct 8 20:00:21.065738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1351) Oct 8 20:00:21.077745 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Oct 8 20:00:21.088828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:00:21.088891 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:00:21.107306 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 20:00:21.109626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:00:21.110918 systemd-resolved[1318]: Positive Trust Anchors: Oct 8 20:00:21.110934 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:00:21.110966 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:00:21.120330 systemd-resolved[1318]: Defaulting to hostname 'linux'. Oct 8 20:00:21.122840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:00:21.124516 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:00:21.125893 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:21.138789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:00:21.145067 systemd-networkd[1372]: lo: Link UP Oct 8 20:00:21.145074 systemd-networkd[1372]: lo: Gained carrier Oct 8 20:00:21.147537 systemd-networkd[1372]: Enumeration completed Oct 8 20:00:21.148367 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:21.148379 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:00:21.149404 systemd-networkd[1372]: eth0: Link UP Oct 8 20:00:21.149416 systemd-networkd[1372]: eth0: Gained carrier Oct 8 20:00:21.149431 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:21.149478 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:00:21.151332 systemd[1]: Reached target network.target - Network. Oct 8 20:00:21.161919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:00:21.166018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:21.169750 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:00:21.173832 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 20:00:21.174620 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 20:00:21.174660 systemd-timesyncd[1392]: Initial clock synchronization to Tue 2024-10-08 20:00:21.011613 UTC. 
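eth0 is matched by /usr/lib/systemd/network/zz-default.network and configured via DHCPv4, which is where the 10.0.0.138/16 address and the 10.0.0.1 gateway come from (the NTP peer 10.0.0.1 contacted by systemd-timesyncd is presumably learned the same way). The sketch below renders a minimal .network unit that behaves like what is logged; the exact contents of the stock Flatcar unit are not reproduced here, this is an illustrative equivalent only, and it writes under /tmp rather than /etc/systemd/network to stay harmless.

    # Sketch: a minimal systemd-networkd unit matching one interface and enabling DHCP,
    # roughly equivalent in effect to the zz-default.network match seen above.
    NETWORK_UNIT = """\
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    """

    # A real override would go in /etc/systemd/network/ and take effect after
    # 'networkctl reload' or a networkd restart.
    with open("/tmp/10-eth0.network", "w") as f:
        f.write(NETWORK_UNIT)
    print(NETWORK_UNIT)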
Oct 8 20:00:21.175606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:00:21.177358 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:00:21.185918 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:00:21.201611 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:00:21.206396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:21.228718 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:00:21.230090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:21.231187 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:00:21.232283 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 20:00:21.233497 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:00:21.234887 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 20:00:21.236025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 20:00:21.237351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:00:21.238551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 20:00:21.238589 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:00:21.239489 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:00:21.241067 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:00:21.243290 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:00:21.250450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:00:21.252528 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:00:21.254088 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:00:21.255220 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:00:21.256204 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:00:21.257188 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:00:21.257222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:00:21.258054 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:00:21.259235 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:00:21.259971 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:00:21.262803 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:00:21.264897 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:00:21.266143 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:00:21.269859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:00:21.275897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Oct 8 20:00:21.277884 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 20:00:21.280294 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:00:21.283743 jq[1415]: false Oct 8 20:00:21.286860 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 20:00:21.292142 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 20:00:21.292523 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:00:21.292597 extend-filesystems[1416]: Found loop3 Oct 8 20:00:21.292597 extend-filesystems[1416]: Found loop4 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found loop5 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda1 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda2 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda3 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found usr Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda4 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda6 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda7 Oct 8 20:00:21.294192 extend-filesystems[1416]: Found vda9 Oct 8 20:00:21.294192 extend-filesystems[1416]: Checking size of /dev/vda9 Oct 8 20:00:21.296250 dbus-daemon[1414]: [system] SELinux support is enabled Oct 8 20:00:21.302018 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:00:21.306822 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:00:21.307106 extend-filesystems[1416]: Resized partition /dev/vda9 Oct 8 20:00:21.308803 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:00:21.314585 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:00:21.315418 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:00:21.316959 jq[1434]: true Oct 8 20:00:21.319044 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 20:00:21.321052 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:00:21.321201 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:00:21.321429 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:00:21.321578 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:00:21.323434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1357) Oct 8 20:00:21.324989 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:00:21.325147 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:00:21.344699 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 20:00:21.350853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:00:21.350894 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
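The online resize reported just above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 7.1 GiB, the usual first-boot step of expanding the ROOT partition's filesystem to fill the disk. The arithmetic, as a quick sketch:

    # Sketch: convert the block counts reported by EXT4/resize2fs above into sizes.
    BLOCK_SIZE = 4096                       # "(4k) blocks" per the resize2fs output

    old_blocks, new_blocks = 553472, 1864699
    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")               # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")               # ~7.11 GiB
    print(f"grown by {to_gib(new_blocks - old_blocks):.2f} GiB") # ~5.00 GiB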
Oct 8 20:00:21.352205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:00:21.365326 jq[1440]: true Oct 8 20:00:21.352220 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 20:00:21.358842 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:00:21.366456 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 20:00:21.366456 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 20:00:21.366456 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 20:00:21.367075 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:00:21.370578 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Oct 8 20:00:21.367813 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:00:21.375112 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 20:00:21.375351 systemd-logind[1425]: New seat seat0. Oct 8 20:00:21.376145 tar[1439]: linux-arm64/helm Oct 8 20:00:21.378268 update_engine[1431]: I20241008 20:00:21.376672 1431 main.cc:92] Flatcar Update Engine starting Oct 8 20:00:21.378535 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:00:21.386457 update_engine[1431]: I20241008 20:00:21.385893 1431 update_check_scheduler.cc:74] Next update check in 5m11s Oct 8 20:00:21.386202 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:00:21.398295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:00:21.432825 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:00:21.434333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:00:21.436842 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 20:00:21.447730 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:00:21.566148 containerd[1444]: time="2024-10-08T20:00:21.566060120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:00:21.592969 containerd[1444]: time="2024-10-08T20:00:21.592931400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594191640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594223040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594238640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594366880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594384480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594430520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594535 containerd[1444]: time="2024-10-08T20:00:21.594455880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594722 containerd[1444]: time="2024-10-08T20:00:21.594604000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594722 containerd[1444]: time="2024-10-08T20:00:21.594619680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594722 containerd[1444]: time="2024-10-08T20:00:21.594632040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594722 containerd[1444]: time="2024-10-08T20:00:21.594642800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594797 containerd[1444]: time="2024-10-08T20:00:21.594734680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.594938 containerd[1444]: time="2024-10-08T20:00:21.594917120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:21.595037 containerd[1444]: time="2024-10-08T20:00:21.595020880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:21.595069 containerd[1444]: time="2024-10-08T20:00:21.595036760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:00:21.595124 containerd[1444]: time="2024-10-08T20:00:21.595110400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:00:21.595172 containerd[1444]: time="2024-10-08T20:00:21.595160200Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:00:21.599027 containerd[1444]: time="2024-10-08T20:00:21.598985200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:00:21.599095 containerd[1444]: time="2024-10-08T20:00:21.599044320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:00:21.599095 containerd[1444]: time="2024-10-08T20:00:21.599069280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:00:21.599095 containerd[1444]: time="2024-10-08T20:00:21.599084240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Oct 8 20:00:21.599152 containerd[1444]: time="2024-10-08T20:00:21.599097720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:00:21.599265 containerd[1444]: time="2024-10-08T20:00:21.599244280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:00:21.599488 containerd[1444]: time="2024-10-08T20:00:21.599471960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:00:21.599586 containerd[1444]: time="2024-10-08T20:00:21.599569760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:00:21.599611 containerd[1444]: time="2024-10-08T20:00:21.599590600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:00:21.599611 containerd[1444]: time="2024-10-08T20:00:21.599603840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:00:21.599654 containerd[1444]: time="2024-10-08T20:00:21.599616520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599654 containerd[1444]: time="2024-10-08T20:00:21.599629280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599654 containerd[1444]: time="2024-10-08T20:00:21.599641400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599716 containerd[1444]: time="2024-10-08T20:00:21.599654640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599716 containerd[1444]: time="2024-10-08T20:00:21.599668240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599716 containerd[1444]: time="2024-10-08T20:00:21.599699280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599716 containerd[1444]: time="2024-10-08T20:00:21.599713240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599724760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599743360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599756480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599768280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599779920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599795 containerd[1444]: time="2024-10-08T20:00:21.599791440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599804360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599815560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599832280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599845360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599858520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599870720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599889320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.599906 containerd[1444]: time="2024-10-08T20:00:21.599901280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.600039 containerd[1444]: time="2024-10-08T20:00:21.599916760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:00:21.600039 containerd[1444]: time="2024-10-08T20:00:21.599935440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.600039 containerd[1444]: time="2024-10-08T20:00:21.599946760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.600039 containerd[1444]: time="2024-10-08T20:00:21.599956880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:00:21.600114 containerd[1444]: time="2024-10-08T20:00:21.600061520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:00:21.600114 containerd[1444]: time="2024-10-08T20:00:21.600077360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:00:21.600114 containerd[1444]: time="2024-10-08T20:00:21.600087760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:00:21.600114 containerd[1444]: time="2024-10-08T20:00:21.600098480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:00:21.600114 containerd[1444]: time="2024-10-08T20:00:21.600107160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.600202 containerd[1444]: time="2024-10-08T20:00:21.600118560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:00:21.600202 containerd[1444]: time="2024-10-08T20:00:21.600127680Z" level=info msg="NRI interface is disabled by configuration." 
Oct 8 20:00:21.600202 containerd[1444]: time="2024-10-08T20:00:21.600137560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:00:21.600547 containerd[1444]: time="2024-10-08T20:00:21.600488000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:00:21.600547 containerd[1444]: time="2024-10-08T20:00:21.600548840Z" level=info msg="Connect containerd service" Oct 8 20:00:21.600699 containerd[1444]: time="2024-10-08T20:00:21.600572560Z" level=info msg="using legacy CRI server" Oct 8 20:00:21.600699 containerd[1444]: time="2024-10-08T20:00:21.600579280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:00:21.600699 containerd[1444]: time="2024-10-08T20:00:21.600650520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:00:21.601285 containerd[1444]: time="2024-10-08T20:00:21.601259400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:00:21.601480 containerd[1444]: time="2024-10-08T20:00:21.601452200Z" level=info msg="Start subscribing containerd event" Oct 8 20:00:21.601526 containerd[1444]: time="2024-10-08T20:00:21.601493920Z" level=info msg="Start recovering state" Oct 8 20:00:21.601689 containerd[1444]: time="2024-10-08T20:00:21.601548000Z" level=info msg="Start event monitor" Oct 8 20:00:21.601689 containerd[1444]: time="2024-10-08T20:00:21.601561840Z" level=info msg="Start snapshots syncer" Oct 8 20:00:21.601689 containerd[1444]: time="2024-10-08T20:00:21.601571280Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:00:21.601689 containerd[1444]: time="2024-10-08T20:00:21.601578440Z" level=info msg="Start streaming server" Oct 8 20:00:21.602178 containerd[1444]: time="2024-10-08T20:00:21.602160720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:00:21.602227 containerd[1444]: time="2024-10-08T20:00:21.602204760Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:00:21.603491 containerd[1444]: time="2024-10-08T20:00:21.602250600Z" level=info msg="containerd successfully booted in 0.037497s" Oct 8 20:00:21.602321 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:00:21.717605 tar[1439]: linux-arm64/LICENSE Oct 8 20:00:21.717605 tar[1439]: linux-arm64/README.md Oct 8 20:00:21.730133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:00:22.282949 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:00:22.302725 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:00:22.309913 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:00:22.315010 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:00:22.315757 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:00:22.318271 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:00:22.328123 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:00:22.342907 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:00:22.344808 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 20:00:22.346026 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:00:22.728925 systemd-networkd[1372]: eth0: Gained IPv6LL Oct 8 20:00:22.731377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:00:22.733455 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:00:22.747043 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 20:00:22.751563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:22.753523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:00:22.778350 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:00:22.781266 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 20:00:22.781426 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 20:00:22.783386 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:00:23.231440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
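Annotation: the "no network config found in /etc/cni/net.d" error above is normal on first boot; the CRI plugin keeps retrying until a CNI network add-on drops a conflist into that directory. A minimal sketch of such a file, assuming the standard bridge and portmap plugins under /opt/cni/bin and an example 10.244.0.0/24 pod subnet (both assumptions, not taken from this host; a real cluster usually gets this file from its network add-on instead):

  # write an illustrative /etc/cni/net.d/10-bridge.conflist
  cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
  {
    "cniVersion": "0.4.0",
    "name": "example-pod-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.244.0.0/24",
          "routes": [ { "dst": "0.0.0.0/0" } ]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF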
Oct 8 20:00:23.232984 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:00:23.234343 systemd[1]: Startup finished in 555ms (kernel) + 5.008s (initrd) + 3.548s (userspace) = 9.112s. Oct 8 20:00:23.234965 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:23.643886 kubelet[1527]: E1008 20:00:23.643784 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:23.646319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:23.646462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:00:27.245295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:00:27.246366 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:58494.service - OpenSSH per-connection server daemon (10.0.0.1:58494). Oct 8 20:00:27.308382 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 58494 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:27.310135 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:27.322420 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:00:27.333867 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:00:27.335727 systemd-logind[1425]: New session 1 of user core. Oct 8 20:00:27.341803 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:00:27.343684 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:00:27.349476 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:00:27.419997 systemd[1544]: Queued start job for default target default.target. Oct 8 20:00:27.427537 systemd[1544]: Created slice app.slice - User Application Slice. Oct 8 20:00:27.427580 systemd[1544]: Reached target paths.target - Paths. Oct 8 20:00:27.427591 systemd[1544]: Reached target timers.target - Timers. Oct 8 20:00:27.428651 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:00:27.437060 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:00:27.437114 systemd[1544]: Reached target sockets.target - Sockets. Oct 8 20:00:27.437126 systemd[1544]: Reached target basic.target - Basic System. Oct 8 20:00:27.437158 systemd[1544]: Reached target default.target - Main User Target. Oct 8 20:00:27.437181 systemd[1544]: Startup finished in 83ms. Oct 8 20:00:27.437360 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:00:27.438519 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:00:27.495532 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:58504.service - OpenSSH per-connection server daemon (10.0.0.1:58504). Oct 8 20:00:27.533181 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 58504 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:27.534451 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:27.538762 systemd-logind[1425]: New session 2 of user core. 
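Annotation: the kubelet exit above (missing /var/lib/kubelet/config.yaml) is expected on a node bootstrapped this way before kubeadm init/join has run, since kubeadm is what writes that file; systemd simply keeps restarting the unit until then. For reference, the file it eventually drops is a KubeletConfiguration object along these lines (an abridged sketch, not this node's actual config):

  # /var/lib/kubelet/config.yaml (abridged, illustrative)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  clusterDNS:
    - 10.96.0.10
  clusterDomain: cluster.local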
Oct 8 20:00:27.549827 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:00:27.600442 sshd[1555]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:27.614828 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:58504.service: Deactivated successfully. Oct 8 20:00:27.615956 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:00:27.618816 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:00:27.619930 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:58516.service - OpenSSH per-connection server daemon (10.0.0.1:58516). Oct 8 20:00:27.620509 systemd-logind[1425]: Removed session 2. Oct 8 20:00:27.657628 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 58516 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:27.659748 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:27.663066 systemd-logind[1425]: New session 3 of user core. Oct 8 20:00:27.670809 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:00:27.717806 sshd[1562]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:27.738944 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:58516.service: Deactivated successfully. Oct 8 20:00:27.740252 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:00:27.741416 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:00:27.742517 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:58522.service - OpenSSH per-connection server daemon (10.0.0.1:58522). Oct 8 20:00:27.743329 systemd-logind[1425]: Removed session 3. Oct 8 20:00:27.780936 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 58522 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:27.782146 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:27.785283 systemd-logind[1425]: New session 4 of user core. Oct 8 20:00:27.793796 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:00:27.845037 sshd[1569]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:27.857886 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:58522.service: Deactivated successfully. Oct 8 20:00:27.859240 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:00:27.861708 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:00:27.862884 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). Oct 8 20:00:27.863639 systemd-logind[1425]: Removed session 4. Oct 8 20:00:27.900398 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:27.901565 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:27.905218 systemd-logind[1425]: New session 5 of user core. Oct 8 20:00:27.914844 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 20:00:27.976985 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:00:27.979406 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:27.999876 sudo[1579]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:28.001956 sshd[1576]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:28.010906 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:58526.service: Deactivated successfully. 
Oct 8 20:00:28.012524 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:00:28.015872 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:00:28.032052 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:58536.service - OpenSSH per-connection server daemon (10.0.0.1:58536). Oct 8 20:00:28.033617 systemd-logind[1425]: Removed session 5. Oct 8 20:00:28.067511 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 58536 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:28.068794 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:28.072706 systemd-logind[1425]: New session 6 of user core. Oct 8 20:00:28.078820 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:00:28.129213 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:00:28.129477 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:28.132149 sudo[1588]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:28.136457 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:00:28.136754 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:28.150936 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:28.151874 auditctl[1591]: No rules Oct 8 20:00:28.152630 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:00:28.152836 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:28.154258 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:28.175799 augenrules[1609]: No rules Oct 8 20:00:28.176885 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:28.178408 sudo[1587]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:28.179884 sshd[1584]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:28.196956 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:58536.service: Deactivated successfully. Oct 8 20:00:28.198246 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:00:28.200745 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:00:28.201734 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:58540.service - OpenSSH per-connection server daemon (10.0.0.1:58540). Oct 8 20:00:28.202349 systemd-logind[1425]: Removed session 6. Oct 8 20:00:28.239259 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 58540 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:00:28.240324 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:28.243953 systemd-logind[1425]: New session 7 of user core. Oct 8 20:00:28.253815 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:00:28.304042 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:00:28.304445 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:28.599901 systemd[1]: Starting docker.service - Docker Application Container Engine... 
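Annotation: the sudo sequence above deletes the stock SELinux/default audit rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". Had rules been wanted instead, they would live as fragments under /etc/audit/rules.d/ and be compiled by augenrules, roughly like this sketch (the watch on /etc/kubernetes is purely an example):

  # /etc/audit/rules.d/50-kube.rules -- hypothetical example fragment
  # -D      flush existing rules
  # -b 8192 raise the kernel audit backlog
  -D
  -b 8192
  -w /etc/kubernetes/ -p wa -k kube-config

  # merge rules.d/ into the active ruleset, then list what is loaded
  augenrules --load
  auditctl -l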
Oct 8 20:00:28.600008 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:00:28.853711 dockerd[1638]: time="2024-10-08T20:00:28.853570813Z" level=info msg="Starting up" Oct 8 20:00:29.026035 dockerd[1638]: time="2024-10-08T20:00:29.025985209Z" level=info msg="Loading containers: start." Oct 8 20:00:29.121693 kernel: Initializing XFRM netlink socket Oct 8 20:00:29.189044 systemd-networkd[1372]: docker0: Link UP Oct 8 20:00:29.205943 dockerd[1638]: time="2024-10-08T20:00:29.205893423Z" level=info msg="Loading containers: done." Oct 8 20:00:29.223266 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2404892707-merged.mount: Deactivated successfully. Oct 8 20:00:29.225985 dockerd[1638]: time="2024-10-08T20:00:29.225537008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:00:29.225985 dockerd[1638]: time="2024-10-08T20:00:29.225647556Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:00:29.225985 dockerd[1638]: time="2024-10-08T20:00:29.225771914Z" level=info msg="Daemon has completed initialization" Oct 8 20:00:29.256974 dockerd[1638]: time="2024-10-08T20:00:29.256851535Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:00:29.257020 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:00:29.636712 containerd[1444]: time="2024-10-08T20:00:29.636577961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 20:00:30.399553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740491866.mount: Deactivated successfully. 
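Annotation: dockerd above comes up with the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the warning is informational and only affects image-build performance. The driver in use can be confirmed, and pinned explicitly, roughly as follows (the daemon.json contents are an illustrative assumption, not this host's file):

  # confirm the storage driver the running daemon picked
  docker info --format '{{.Driver}}'

  # pin it explicitly (illustrative /etc/docker/daemon.json)
  cat <<'EOF' > /etc/docker/daemon.json
  {
    "storage-driver": "overlay2"
  }
  EOF
  systemctl restart docker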
Oct 8 20:00:31.892597 containerd[1444]: time="2024-10-08T20:00:31.892480871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:31.893471 containerd[1444]: time="2024-10-08T20:00:31.893167361Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691523" Oct 8 20:00:31.894272 containerd[1444]: time="2024-10-08T20:00:31.894232320Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:31.897237 containerd[1444]: time="2024-10-08T20:00:31.897200114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:31.898523 containerd[1444]: time="2024-10-08T20:00:31.898495414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 2.261863662s" Oct 8 20:00:31.898564 containerd[1444]: time="2024-10-08T20:00:31.898530796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\"" Oct 8 20:00:31.899362 containerd[1444]: time="2024-10-08T20:00:31.899335756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 20:00:33.208486 containerd[1444]: time="2024-10-08T20:00:33.208428458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:33.208895 containerd[1444]: time="2024-10-08T20:00:33.208777016Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460088" Oct 8 20:00:33.209736 containerd[1444]: time="2024-10-08T20:00:33.209707314Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:33.212744 containerd[1444]: time="2024-10-08T20:00:33.212706785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:33.213788 containerd[1444]: time="2024-10-08T20:00:33.213758869Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.314388514s" Oct 8 20:00:33.213820 containerd[1444]: time="2024-10-08T20:00:33.213792312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\"" Oct 8 20:00:33.214786 containerd[1444]: 
time="2024-10-08T20:00:33.214756490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 20:00:33.702423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:00:33.712847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:33.797697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:33.801057 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:33.845201 kubelet[1852]: E1008 20:00:33.845148 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:33.848433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:33.848586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:00:34.640418 containerd[1444]: time="2024-10-08T20:00:34.640366043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:34.641428 containerd[1444]: time="2024-10-08T20:00:34.641389707Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018560" Oct 8 20:00:34.642155 containerd[1444]: time="2024-10-08T20:00:34.642126790Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:34.646117 containerd[1444]: time="2024-10-08T20:00:34.646043258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:34.647149 containerd[1444]: time="2024-10-08T20:00:34.647103411Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 1.432314747s" Oct 8 20:00:34.647149 containerd[1444]: time="2024-10-08T20:00:34.647141693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\"" Oct 8 20:00:34.647851 containerd[1444]: time="2024-10-08T20:00:34.647824122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 20:00:35.701885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632815830.mount: Deactivated successfully. 
Oct 8 20:00:35.926090 containerd[1444]: time="2024-10-08T20:00:35.926026441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:35.926957 containerd[1444]: time="2024-10-08T20:00:35.926910018Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753317" Oct 8 20:00:35.927592 containerd[1444]: time="2024-10-08T20:00:35.927568526Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:35.929979 containerd[1444]: time="2024-10-08T20:00:35.929938429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:35.930344 containerd[1444]: time="2024-10-08T20:00:35.930317184Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 1.282460352s" Oct 8 20:00:35.930373 containerd[1444]: time="2024-10-08T20:00:35.930350424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\"" Oct 8 20:00:35.931161 containerd[1444]: time="2024-10-08T20:00:35.931125472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:00:36.534354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111900508.mount: Deactivated successfully. 
Oct 8 20:00:37.259499 containerd[1444]: time="2024-10-08T20:00:37.259449918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.260138 containerd[1444]: time="2024-10-08T20:00:37.260103397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 8 20:00:37.260986 containerd[1444]: time="2024-10-08T20:00:37.260962110Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.263817 containerd[1444]: time="2024-10-08T20:00:37.263765226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.264936 containerd[1444]: time="2024-10-08T20:00:37.264903409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.333743095s" Oct 8 20:00:37.265000 containerd[1444]: time="2024-10-08T20:00:37.264938153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 20:00:37.265507 containerd[1444]: time="2024-10-08T20:00:37.265413125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 20:00:37.702480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072129348.mount: Deactivated successfully. 
Oct 8 20:00:37.706053 containerd[1444]: time="2024-10-08T20:00:37.706012581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.707151 containerd[1444]: time="2024-10-08T20:00:37.707118493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 8 20:00:37.707922 containerd[1444]: time="2024-10-08T20:00:37.707861007Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.709947 containerd[1444]: time="2024-10-08T20:00:37.709886785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.710758 containerd[1444]: time="2024-10-08T20:00:37.710726231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 445.21043ms" Oct 8 20:00:37.710818 containerd[1444]: time="2024-10-08T20:00:37.710758981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 8 20:00:37.711391 containerd[1444]: time="2024-10-08T20:00:37.711271608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 20:00:38.329662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931626321.mount: Deactivated successfully. Oct 8 20:00:40.506070 containerd[1444]: time="2024-10-08T20:00:40.505735651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:40.507107 containerd[1444]: time="2024-10-08T20:00:40.507064361Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868194" Oct 8 20:00:40.508017 containerd[1444]: time="2024-10-08T20:00:40.507947572Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:40.511560 containerd[1444]: time="2024-10-08T20:00:40.511512117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:40.512837 containerd[1444]: time="2024-10-08T20:00:40.512782494Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.801468275s" Oct 8 20:00:40.512837 containerd[1444]: time="2024-10-08T20:00:40.512815792Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Oct 8 20:00:43.953311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
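Annotation: the PullImage calls from 20:00:29 through 20:00:40 pre-fetch the v1.31.0 control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, plus coredns, pause and etcd) through containerd's CRI API. The same image store can be inspected or refreshed by hand with crictl against the same socket, for example:

  # list what the CRI image store now contains
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

  # re-pull one image manually if needed
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/etcd:3.5.15-0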
Oct 8 20:00:43.967928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:44.057663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.061095 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:44.094687 kubelet[2005]: E1008 20:00:44.094638 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:44.096953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:44.097250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:00:44.258235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.265888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:44.286117 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... Oct 8 20:00:44.286131 systemd[1]: Reloading... Oct 8 20:00:44.342781 zram_generator::config[2057]: No configuration found. Oct 8 20:00:44.477278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:44.528526 systemd[1]: Reloading finished in 242 ms. Oct 8 20:00:44.570464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.572301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:44.574025 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:00:44.574196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.575521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:44.668904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.673158 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:00:44.707252 kubelet[2106]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:00:44.707252 kubelet[2106]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:00:44.707252 kubelet[2106]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
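Annotation: kubelet 2106 starts with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir passed as flags (hence the deprecation notices) and with KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS unset. On a systemd-managed node those variables are normally supplied by drop-ins under kubelet.service.d; a sketch of one (the --node-ip value is an arbitrary example, not a recommendation for this host):

  # /etc/systemd/system/kubelet.service.d/20-extra-args.conf -- illustrative drop-in
  [Service]
  Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.138"

  # after editing a drop-in:
  systemctl daemon-reload
  systemctl restart kubelet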
Oct 8 20:00:44.707542 kubelet[2106]: I1008 20:00:44.707446 2106 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:00:45.401623 kubelet[2106]: I1008 20:00:45.400899 2106 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:00:45.401623 kubelet[2106]: I1008 20:00:45.400929 2106 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:00:45.401623 kubelet[2106]: I1008 20:00:45.401302 2106 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:00:45.446268 kubelet[2106]: E1008 20:00:45.446234 2106 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:45.446926 kubelet[2106]: I1008 20:00:45.446914 2106 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:00:45.452801 kubelet[2106]: E1008 20:00:45.452621 2106 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:00:45.452801 kubelet[2106]: I1008 20:00:45.452649 2106 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:00:45.457716 kubelet[2106]: I1008 20:00:45.457697 2106 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:00:45.458575 kubelet[2106]: I1008 20:00:45.458556 2106 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:00:45.458805 kubelet[2106]: I1008 20:00:45.458783 2106 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:00:45.459027 kubelet[2106]: I1008 20:00:45.458871 2106 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:00:45.459305 kubelet[2106]: I1008 20:00:45.459292 2106 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:00:45.459362 kubelet[2106]: I1008 20:00:45.459354 2106 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:00:45.459824 kubelet[2106]: I1008 20:00:45.459565 2106 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:45.461344 kubelet[2106]: I1008 20:00:45.461326 2106 kubelet.go:408] "Attempting to sync node with API server" Oct 8 20:00:45.461434 kubelet[2106]: I1008 20:00:45.461423 2106 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:00:45.461623 kubelet[2106]: I1008 20:00:45.461609 2106 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:00:45.461713 kubelet[2106]: I1008 20:00:45.461703 2106 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:00:45.466953 kubelet[2106]: W1008 20:00:45.466898 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:45.467020 kubelet[2106]: E1008 20:00:45.466967 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:45.467020 kubelet[2106]: W1008 20:00:45.466958 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:45.467063 kubelet[2106]: E1008 20:00:45.467023 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:45.467063 kubelet[2106]: I1008 20:00:45.467048 2106 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:00:45.468837 kubelet[2106]: I1008 20:00:45.468820 2106 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:00:45.469603 kubelet[2106]: W1008 20:00:45.469584 2106 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:00:45.471077 kubelet[2106]: I1008 20:00:45.471053 2106 server.go:1269] "Started kubelet" Oct 8 20:00:45.471625 kubelet[2106]: I1008 20:00:45.471402 2106 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:00:45.471856 kubelet[2106]: I1008 20:00:45.471808 2106 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:00:45.472062 kubelet[2106]: I1008 20:00:45.472042 2106 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:00:45.472914 kubelet[2106]: I1008 20:00:45.472891 2106 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:00:45.473701 kubelet[2106]: I1008 20:00:45.472988 2106 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:00:45.473701 kubelet[2106]: I1008 20:00:45.473059 2106 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:00:45.474101 kubelet[2106]: E1008 20:00:45.474087 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:45.474381 kubelet[2106]: I1008 20:00:45.474371 2106 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:00:45.474566 kubelet[2106]: I1008 20:00:45.474555 2106 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:00:45.474713 kubelet[2106]: I1008 20:00:45.474703 2106 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:00:45.474898 kubelet[2106]: E1008 20:00:45.474858 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Oct 8 20:00:45.475167 kubelet[2106]: W1008 20:00:45.475122 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 
20:00:45.475267 kubelet[2106]: E1008 20:00:45.475249 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:45.476073 kubelet[2106]: E1008 20:00:45.476050 2106 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:00:45.476831 kubelet[2106]: I1008 20:00:45.476747 2106 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:00:45.476831 kubelet[2106]: I1008 20:00:45.476765 2106 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:00:45.476831 kubelet[2106]: I1008 20:00:45.476822 2106 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:00:45.477018 kubelet[2106]: E1008 20:00:45.475813 2106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92b0e5d162d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:45.471032021 +0000 UTC m=+0.795029459,LastTimestamp:2024-10-08 20:00:45.471032021 +0000 UTC m=+0.795029459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:00:45.487251 kubelet[2106]: I1008 20:00:45.487213 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:00:45.488308 kubelet[2106]: I1008 20:00:45.488185 2106 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:00:45.488308 kubelet[2106]: I1008 20:00:45.488214 2106 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:00:45.488308 kubelet[2106]: I1008 20:00:45.488227 2106 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:00:45.488308 kubelet[2106]: E1008 20:00:45.488263 2106 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:00:45.488593 kubelet[2106]: W1008 20:00:45.488552 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:45.488639 kubelet[2106]: E1008 20:00:45.488600 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:45.488639 kubelet[2106]: I1008 20:00:45.488633 2106 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:00:45.488744 kubelet[2106]: I1008 20:00:45.488643 2106 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:00:45.488744 kubelet[2106]: I1008 20:00:45.488658 2106 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:45.574735 kubelet[2106]: E1008 20:00:45.574688 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:45.589189 kubelet[2106]: E1008 20:00:45.589149 2106 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:00:45.675563 kubelet[2106]: E1008 20:00:45.675283 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:45.676187 kubelet[2106]: E1008 20:00:45.675786 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Oct 8 20:00:45.678502 kubelet[2106]: I1008 20:00:45.678379 2106 policy_none.go:49] "None policy: Start" Oct 8 20:00:45.679088 kubelet[2106]: I1008 20:00:45.679072 2106 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:00:45.679162 kubelet[2106]: I1008 20:00:45.679098 2106 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:00:45.684814 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:00:45.698051 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:00:45.700468 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 20:00:45.713384 kubelet[2106]: I1008 20:00:45.713344 2106 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:00:45.713623 kubelet[2106]: I1008 20:00:45.713519 2106 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:00:45.713623 kubelet[2106]: I1008 20:00:45.713530 2106 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:00:45.713786 kubelet[2106]: I1008 20:00:45.713757 2106 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:00:45.714807 kubelet[2106]: E1008 20:00:45.714784 2106 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 20:00:45.796893 systemd[1]: Created slice kubepods-burstable-pod509dd3c42261518bd2c219e149a24bb0.slice - libcontainer container kubepods-burstable-pod509dd3c42261518bd2c219e149a24bb0.slice. Oct 8 20:00:45.814668 kubelet[2106]: I1008 20:00:45.814632 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 20:00:45.815084 kubelet[2106]: E1008 20:00:45.815060 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 8 20:00:45.822754 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. Oct 8 20:00:45.827218 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
Oct 8 20:00:45.877109 kubelet[2106]: I1008 20:00:45.877062 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:45.877360 kubelet[2106]: I1008 20:00:45.877123 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:45.877360 kubelet[2106]: I1008 20:00:45.877146 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:45.877360 kubelet[2106]: I1008 20:00:45.877163 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:45.877360 kubelet[2106]: I1008 20:00:45.877180 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:45.877360 kubelet[2106]: I1008 20:00:45.877201 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:45.877481 kubelet[2106]: I1008 20:00:45.877215 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:45.877481 kubelet[2106]: I1008 20:00:45.877229 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:45.877481 kubelet[2106]: I1008 20:00:45.877243 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " 
pod="kube-system/kube-scheduler-localhost" Oct 8 20:00:46.016772 kubelet[2106]: I1008 20:00:46.016715 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 20:00:46.017099 kubelet[2106]: E1008 20:00:46.017058 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 8 20:00:46.076519 kubelet[2106]: E1008 20:00:46.076480 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Oct 8 20:00:46.120923 kubelet[2106]: E1008 20:00:46.120837 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.121418 containerd[1444]: time="2024-10-08T20:00:46.121382896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:509dd3c42261518bd2c219e149a24bb0,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:46.125959 kubelet[2106]: E1008 20:00:46.125939 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.126572 containerd[1444]: time="2024-10-08T20:00:46.126385080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:46.129905 kubelet[2106]: E1008 20:00:46.129868 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.130406 containerd[1444]: time="2024-10-08T20:00:46.130180661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:46.418737 kubelet[2106]: I1008 20:00:46.418615 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 20:00:46.419203 kubelet[2106]: E1008 20:00:46.419147 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 8 20:00:46.443674 kubelet[2106]: W1008 20:00:46.443619 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:46.443753 kubelet[2106]: E1008 20:00:46.443706 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:46.555348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536039837.mount: Deactivated successfully. 
Oct 8 20:00:46.559353 containerd[1444]: time="2024-10-08T20:00:46.559311833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:00:46.561288 containerd[1444]: time="2024-10-08T20:00:46.561247712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 20:00:46.561790 containerd[1444]: time="2024-10-08T20:00:46.561755652Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:00:46.563382 containerd[1444]: time="2024-10-08T20:00:46.563340821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:00:46.564261 containerd[1444]: time="2024-10-08T20:00:46.564232244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:00:46.565057 containerd[1444]: time="2024-10-08T20:00:46.565013039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:00:46.565415 containerd[1444]: time="2024-10-08T20:00:46.565394763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:00:46.568023 containerd[1444]: time="2024-10-08T20:00:46.567987819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 446.528305ms" Oct 8 20:00:46.568650 containerd[1444]: time="2024-10-08T20:00:46.568617778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:00:46.570616 containerd[1444]: time="2024-10-08T20:00:46.570393389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 443.949718ms" Oct 8 20:00:46.573673 containerd[1444]: time="2024-10-08T20:00:46.573639785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 443.401772ms" Oct 8 20:00:46.591290 kubelet[2106]: W1008 20:00:46.591238 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:46.591426 
kubelet[2106]: E1008 20:00:46.591401 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:46.714548 containerd[1444]: time="2024-10-08T20:00:46.714446265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:46.714548 containerd[1444]: time="2024-10-08T20:00:46.714507535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:46.715759 containerd[1444]: time="2024-10-08T20:00:46.715321702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:46.715759 containerd[1444]: time="2024-10-08T20:00:46.715381252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:46.715759 containerd[1444]: time="2024-10-08T20:00:46.715396640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.715759 containerd[1444]: time="2024-10-08T20:00:46.715480051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.715759 containerd[1444]: time="2024-10-08T20:00:46.714529996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.716244 containerd[1444]: time="2024-10-08T20:00:46.716098779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.718142 containerd[1444]: time="2024-10-08T20:00:46.717802690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:46.718142 containerd[1444]: time="2024-10-08T20:00:46.717856445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:46.718142 containerd[1444]: time="2024-10-08T20:00:46.717871273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.718142 containerd[1444]: time="2024-10-08T20:00:46.717959920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:46.733831 systemd[1]: Started cri-containerd-8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7.scope - libcontainer container 8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7. Oct 8 20:00:46.737712 systemd[1]: Started cri-containerd-aca66889e9fc6d00bb2c3b6a5819da4640881a5b66e2e4217df80fa833787d78.scope - libcontainer container aca66889e9fc6d00bb2c3b6a5819da4640881a5b66e2e4217df80fa833787d78. Oct 8 20:00:46.738740 systemd[1]: Started cri-containerd-d447a97b5b758fc2ce463af455d9eeb394710fa7bb086f9b466995b94fb59244.scope - libcontainer container d447a97b5b758fc2ce463af455d9eeb394710fa7bb086f9b466995b94fb59244. 
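The ImageCreate/ImageUpdate events and the three "Pulled image registry.k8s.io/pause:3.8" lines above are containerd fetching the sandbox (pause) image once per pending control-plane sandbox before the cri-containerd scopes start. A rough equivalent of one such pull with the containerd Go client, as a sketch that assumes the default socket path and the k8s.io namespace used by the CRI plugin:

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Talk to the same containerd instance the kubelet uses.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // CRI-managed images live in the "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull and unpack the sandbox image named in the log.
    img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}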
Oct 8 20:00:46.743243 kubelet[2106]: W1008 20:00:46.743149 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:46.743243 kubelet[2106]: E1008 20:00:46.743210 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:46.766547 containerd[1444]: time="2024-10-08T20:00:46.766511011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7\"" Oct 8 20:00:46.767894 kubelet[2106]: E1008 20:00:46.767872 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.770175 containerd[1444]: time="2024-10-08T20:00:46.770033178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:509dd3c42261518bd2c219e149a24bb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"aca66889e9fc6d00bb2c3b6a5819da4640881a5b66e2e4217df80fa833787d78\"" Oct 8 20:00:46.770255 containerd[1444]: time="2024-10-08T20:00:46.770226179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d447a97b5b758fc2ce463af455d9eeb394710fa7bb086f9b466995b94fb59244\"" Oct 8 20:00:46.770302 containerd[1444]: time="2024-10-08T20:00:46.770270822Z" level=info msg="CreateContainer within sandbox \"8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:00:46.771026 kubelet[2106]: E1008 20:00:46.771007 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.771125 kubelet[2106]: E1008 20:00:46.771047 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:46.772578 containerd[1444]: time="2024-10-08T20:00:46.772377360Z" level=info msg="CreateContainer within sandbox \"aca66889e9fc6d00bb2c3b6a5819da4640881a5b66e2e4217df80fa833787d78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:00:46.772688 containerd[1444]: time="2024-10-08T20:00:46.772651693Z" level=info msg="CreateContainer within sandbox \"d447a97b5b758fc2ce463af455d9eeb394710fa7bb086f9b466995b94fb59244\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:00:46.789854 containerd[1444]: time="2024-10-08T20:00:46.789813181Z" level=info msg="CreateContainer within sandbox \"aca66889e9fc6d00bb2c3b6a5819da4640881a5b66e2e4217df80fa833787d78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b8a068f9f07f0b9b72547eca34ee0b520a459684aad9a3e5f11238c01a17386\"" Oct 8 20:00:46.790401 
containerd[1444]: time="2024-10-08T20:00:46.790357131Z" level=info msg="StartContainer for \"2b8a068f9f07f0b9b72547eca34ee0b520a459684aad9a3e5f11238c01a17386\"" Oct 8 20:00:46.792767 containerd[1444]: time="2024-10-08T20:00:46.792729290Z" level=info msg="CreateContainer within sandbox \"8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d2e1a56f21deaefe38d996982536caf2d9723a4ced3be85049d77c593a093b0\"" Oct 8 20:00:46.793180 containerd[1444]: time="2024-10-08T20:00:46.793148103Z" level=info msg="StartContainer for \"2d2e1a56f21deaefe38d996982536caf2d9723a4ced3be85049d77c593a093b0\"" Oct 8 20:00:46.794416 containerd[1444]: time="2024-10-08T20:00:46.794385480Z" level=info msg="CreateContainer within sandbox \"d447a97b5b758fc2ce463af455d9eeb394710fa7bb086f9b466995b94fb59244\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc04bb24b11e2617d514ef00cbbe4bc06d097ce671ce97861c0375f55e29ef2d\"" Oct 8 20:00:46.795016 containerd[1444]: time="2024-10-08T20:00:46.794789866Z" level=info msg="StartContainer for \"cc04bb24b11e2617d514ef00cbbe4bc06d097ce671ce97861c0375f55e29ef2d\"" Oct 8 20:00:46.819827 systemd[1]: Started cri-containerd-2b8a068f9f07f0b9b72547eca34ee0b520a459684aad9a3e5f11238c01a17386.scope - libcontainer container 2b8a068f9f07f0b9b72547eca34ee0b520a459684aad9a3e5f11238c01a17386. Oct 8 20:00:46.820858 systemd[1]: Started cri-containerd-2d2e1a56f21deaefe38d996982536caf2d9723a4ced3be85049d77c593a093b0.scope - libcontainer container 2d2e1a56f21deaefe38d996982536caf2d9723a4ced3be85049d77c593a093b0. Oct 8 20:00:46.824226 systemd[1]: Started cri-containerd-cc04bb24b11e2617d514ef00cbbe4bc06d097ce671ce97861c0375f55e29ef2d.scope - libcontainer container cc04bb24b11e2617d514ef00cbbe4bc06d097ce671ce97861c0375f55e29ef2d. 
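Each "CreateContainer within sandbox ... returns container id" / "StartContainer" pair above is the kubelet driving containerd's CRI RuntimeService over gRPC. A bare-bones sketch of those two calls with the published CRI API; the sandbox id is the scheduler sandbox from the log, the image tag is assumed to match the kubelet's v1.31.0, and the full sandbox and container configs the kubelet actually builds are omitted.

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)
    ctx := context.Background()

    // The kube-scheduler sandbox id returned by RunPodSandbox above.
    sandboxID := "8aa7edb3a387e2a45d955fa5a13cb9543fde653cebc81200e1546338d00fcbb7"

    // Create the container inside the existing sandbox (config heavily trimmed)...
    created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId: sandboxID,
        Config: &runtimeapi.ContainerConfig{
            Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
            Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.0"}, // tag assumed
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    // ...then start it, which is what the StartContainer log lines report.
    if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
        log.Fatal(err)
    }
}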
Oct 8 20:00:46.865937 containerd[1444]: time="2024-10-08T20:00:46.865770009Z" level=info msg="StartContainer for \"2b8a068f9f07f0b9b72547eca34ee0b520a459684aad9a3e5f11238c01a17386\" returns successfully" Oct 8 20:00:46.865937 containerd[1444]: time="2024-10-08T20:00:46.865796547Z" level=info msg="StartContainer for \"2d2e1a56f21deaefe38d996982536caf2d9723a4ced3be85049d77c593a093b0\" returns successfully" Oct 8 20:00:46.865937 containerd[1444]: time="2024-10-08T20:00:46.865799904Z" level=info msg="StartContainer for \"cc04bb24b11e2617d514ef00cbbe4bc06d097ce671ce97861c0375f55e29ef2d\" returns successfully" Oct 8 20:00:46.869251 kubelet[2106]: W1008 20:00:46.869152 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Oct 8 20:00:46.869251 kubelet[2106]: E1008 20:00:46.869213 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:00:46.878761 kubelet[2106]: E1008 20:00:46.878728 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Oct 8 20:00:47.220968 kubelet[2106]: I1008 20:00:47.220844 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 20:00:47.496085 kubelet[2106]: E1008 20:00:47.495904 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:47.501126 kubelet[2106]: E1008 20:00:47.501025 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:47.505072 kubelet[2106]: E1008 20:00:47.504637 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:48.165415 kubelet[2106]: I1008 20:00:48.165246 2106 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 8 20:00:48.165415 kubelet[2106]: E1008 20:00:48.165285 2106 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 8 20:00:48.182304 kubelet[2106]: E1008 20:00:48.182269 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.283326 kubelet[2106]: E1008 20:00:48.283276 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.383879 kubelet[2106]: E1008 20:00:48.383841 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.484789 kubelet[2106]: E1008 20:00:48.484745 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.510399 kubelet[2106]: E1008 20:00:48.510363 
2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:48.585754 kubelet[2106]: E1008 20:00:48.585722 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.685846 kubelet[2106]: E1008 20:00:48.685809 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:48.696489 kubelet[2106]: E1008 20:00:48.696396 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:49.464169 kubelet[2106]: I1008 20:00:49.464114 2106 apiserver.go:52] "Watching apiserver" Oct 8 20:00:49.475685 kubelet[2106]: I1008 20:00:49.475643 2106 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:00:50.136775 systemd[1]: Reloading requested from client PID 2380 ('systemctl') (unit session-7.scope)... Oct 8 20:00:50.136793 systemd[1]: Reloading... Oct 8 20:00:50.200740 zram_generator::config[2422]: No configuration found. Oct 8 20:00:50.288607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:50.353646 systemd[1]: Reloading finished in 216 ms. Oct 8 20:00:50.394616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:50.410112 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:00:50.410292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:50.410335 systemd[1]: kubelet.service: Consumed 1.129s CPU time, 117.5M memory peak, 0B memory swap peak. Oct 8 20:00:50.420232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:50.508409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:50.511787 (kubelet)[2461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:00:50.564380 kubelet[2461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:00:50.564380 kubelet[2461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:00:50.564380 kubelet[2461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
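Before the kubelet restart just above, the log alternates between "Attempting to register node" and "Unable to register node with API server ... connect: connection refused" until the apiserver container is up, after which "Successfully registered node" appears. That loop is essentially create-node-with-retry; a hedged sketch of the same idea using client-go and apimachinery's wait helpers, with the kubeconfig path and polling interval chosen for illustration only.

package main

import (
    "context"
    "log"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // path assumed
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}

    // Keep trying until the API server at https://10.0.0.138:6443 answers,
    // roughly what the kubelet's node registration loop does.
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            _, err := cs.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{})
            if err == nil || apierrors.IsAlreadyExists(err) {
                return true, nil // registered, or the Node object already exists
            }
            log.Printf("Unable to register node, will retry: %v", err)
            return false, nil
        })
    if err != nil {
        log.Fatal(err)
    }
    log.Println("Successfully registered node \"localhost\"")
}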
Oct 8 20:00:50.564380 kubelet[2461]: I1008 20:00:50.564014 2461 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:00:50.570077 kubelet[2461]: I1008 20:00:50.570026 2461 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:00:50.570077 kubelet[2461]: I1008 20:00:50.570059 2461 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:00:50.570306 kubelet[2461]: I1008 20:00:50.570279 2461 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:00:50.571839 kubelet[2461]: I1008 20:00:50.571610 2461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:00:50.574331 kubelet[2461]: I1008 20:00:50.574292 2461 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:00:50.577370 kubelet[2461]: E1008 20:00:50.577330 2461 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:00:50.577439 kubelet[2461]: I1008 20:00:50.577387 2461 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:00:50.580775 kubelet[2461]: I1008 20:00:50.580193 2461 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:00:50.581596 kubelet[2461]: I1008 20:00:50.581515 2461 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:00:50.581657 kubelet[2461]: I1008 20:00:50.581628 2461 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:00:50.582306 kubelet[2461]: I1008 20:00:50.581651 2461 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:00:50.582306 kubelet[2461]: I1008 20:00:50.582248 2461 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:00:50.582306 kubelet[2461]: I1008 20:00:50.582259 2461 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:00:50.582306 kubelet[2461]: I1008 20:00:50.582302 2461 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:50.582639 kubelet[2461]: I1008 20:00:50.582405 2461 kubelet.go:408] "Attempting to sync node with API server" Oct 8 20:00:50.582639 kubelet[2461]: I1008 20:00:50.582418 2461 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:00:50.582639 kubelet[2461]: I1008 20:00:50.582443 2461 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:00:50.582639 kubelet[2461]: I1008 20:00:50.582452 2461 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:00:50.583797 kubelet[2461]: I1008 20:00:50.583716 2461 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:00:50.586305 kubelet[2461]: I1008 20:00:50.584531 2461 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:00:50.586305 kubelet[2461]: I1008 20:00:50.585617 2461 server.go:1269] "Started kubelet" Oct 8 20:00:50.586305 kubelet[2461]: I1008 20:00:50.585770 2461 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:00:50.586305 kubelet[2461]: I1008 20:00:50.585933 2461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:00:50.586305 kubelet[2461]: I1008 20:00:50.586165 2461 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:00:50.587407 kubelet[2461]: I1008 20:00:50.587381 2461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:00:50.589828 kubelet[2461]: I1008 20:00:50.589652 2461 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:00:50.592060 kubelet[2461]: I1008 20:00:50.590659 2461 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:00:50.592060 kubelet[2461]: I1008 20:00:50.590777 2461 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:00:50.592060 kubelet[2461]: I1008 20:00:50.590923 2461 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:00:50.592060 kubelet[2461]: E1008 20:00:50.591569 2461 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:00:50.592060 kubelet[2461]: E1008 20:00:50.591841 2461 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:50.592597 kubelet[2461]: I1008 20:00:50.592572 2461 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:00:50.599882 kubelet[2461]: I1008 20:00:50.597204 2461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:00:50.613692 kubelet[2461]: I1008 20:00:50.611354 2461 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:00:50.613692 kubelet[2461]: I1008 20:00:50.611406 2461 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:00:50.615938 kubelet[2461]: I1008 20:00:50.615877 2461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:00:50.618442 kubelet[2461]: I1008 20:00:50.618407 2461 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:00:50.618442 kubelet[2461]: I1008 20:00:50.618435 2461 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:00:50.618516 kubelet[2461]: I1008 20:00:50.618452 2461 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:00:50.618516 kubelet[2461]: E1008 20:00:50.618493 2461 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:00:50.643487 kubelet[2461]: I1008 20:00:50.643452 2461 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:00:50.643487 kubelet[2461]: I1008 20:00:50.643474 2461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:00:50.643487 kubelet[2461]: I1008 20:00:50.643492 2461 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:50.643642 kubelet[2461]: I1008 20:00:50.643622 2461 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:00:50.643673 kubelet[2461]: I1008 20:00:50.643638 2461 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:00:50.643673 kubelet[2461]: I1008 20:00:50.643654 2461 policy_none.go:49] "None policy: Start" Oct 8 20:00:50.644257 kubelet[2461]: I1008 20:00:50.644226 2461 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:00:50.644257 kubelet[2461]: I1008 20:00:50.644253 2461 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:00:50.644449 kubelet[2461]: I1008 20:00:50.644423 2461 state_mem.go:75] "Updated machine memory state" Oct 8 20:00:50.648163 kubelet[2461]: I1008 20:00:50.648091 2461 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:00:50.648520 kubelet[2461]: I1008 20:00:50.648238 2461 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:00:50.648520 kubelet[2461]: I1008 20:00:50.648254 2461 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:00:50.648604 kubelet[2461]: I1008 20:00:50.648535 2461 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:00:50.752619 kubelet[2461]: I1008 20:00:50.752564 2461 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 20:00:50.766456 kubelet[2461]: I1008 20:00:50.766432 2461 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Oct 8 20:00:50.766664 kubelet[2461]: I1008 20:00:50.766570 2461 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 8 20:00:50.792737 kubelet[2461]: I1008 20:00:50.792481 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:50.792737 kubelet[2461]: I1008 20:00:50.792524 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:50.792737 kubelet[2461]: I1008 20:00:50.792545 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:50.792737 kubelet[2461]: I1008 20:00:50.792561 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:50.792737 kubelet[2461]: I1008 20:00:50.792578 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:50.792965 kubelet[2461]: I1008 20:00:50.792591 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/509dd3c42261518bd2c219e149a24bb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"509dd3c42261518bd2c219e149a24bb0\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:50.792965 kubelet[2461]: I1008 20:00:50.792605 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:50.792965 kubelet[2461]: I1008 20:00:50.792620 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:00:50.792965 kubelet[2461]: I1008 20:00:50.792635 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:00:51.028026 kubelet[2461]: E1008 20:00:51.027993 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.028026 kubelet[2461]: E1008 20:00:51.028273 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.028834 kubelet[2461]: E1008 20:00:51.028780 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.133981 sudo[2499]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 20:00:51.134249 sudo[2499]: pam_unix(sudo:session): session opened for user root(uid=0) 
by core(uid=0) Oct 8 20:00:51.580349 sudo[2499]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:51.591062 kubelet[2461]: I1008 20:00:51.589657 2461 apiserver.go:52] "Watching apiserver" Oct 8 20:00:51.630659 kubelet[2461]: E1008 20:00:51.630014 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.632096 kubelet[2461]: E1008 20:00:51.630850 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.636064 kubelet[2461]: E1008 20:00:51.635983 2461 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 20:00:51.636151 kubelet[2461]: E1008 20:00:51.636111 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:51.652119 kubelet[2461]: I1008 20:00:51.651908 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.651890681 podStartE2EDuration="1.651890681s" podCreationTimestamp="2024-10-08 20:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:00:51.65153943 +0000 UTC m=+1.135617805" watchObservedRunningTime="2024-10-08 20:00:51.651890681 +0000 UTC m=+1.135969016" Oct 8 20:00:51.660889 kubelet[2461]: I1008 20:00:51.660153 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.660139503 podStartE2EDuration="1.660139503s" podCreationTimestamp="2024-10-08 20:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:00:51.660098121 +0000 UTC m=+1.144176416" watchObservedRunningTime="2024-10-08 20:00:51.660139503 +0000 UTC m=+1.144217838" Oct 8 20:00:51.676262 kubelet[2461]: I1008 20:00:51.676217 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.676202413 podStartE2EDuration="1.676202413s" podCreationTimestamp="2024-10-08 20:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:00:51.667584267 +0000 UTC m=+1.151662642" watchObservedRunningTime="2024-10-08 20:00:51.676202413 +0000 UTC m=+1.160280748" Oct 8 20:00:51.691671 kubelet[2461]: I1008 20:00:51.691612 2461 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:00:52.630526 kubelet[2461]: E1008 20:00:52.630491 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:53.452748 sudo[1620]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:53.455711 sshd[1617]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:53.458012 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:58540.service: Deactivated successfully. 
Oct 8 20:00:53.459667 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:00:53.459875 systemd[1]: session-7.scope: Consumed 6.396s CPU time, 150.9M memory peak, 0B memory swap peak. Oct 8 20:00:53.461068 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:00:53.461979 systemd-logind[1425]: Removed session 7. Oct 8 20:00:54.651171 kubelet[2461]: E1008 20:00:54.651130 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:56.495063 kubelet[2461]: I1008 20:00:56.495025 2461 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:00:56.495406 containerd[1444]: time="2024-10-08T20:00:56.495326473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:00:56.496115 kubelet[2461]: I1008 20:00:56.495745 2461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:00:57.191493 systemd[1]: Created slice kubepods-besteffort-podf49d75f5_627d_4276_ba1b_87e9699c9a98.slice - libcontainer container kubepods-besteffort-podf49d75f5_627d_4276_ba1b_87e9699c9a98.slice. Oct 8 20:00:57.202736 systemd[1]: Created slice kubepods-burstable-pod3e6bb1ad_171a_406a_844e_20a50f1c74c3.slice - libcontainer container kubepods-burstable-pod3e6bb1ad_171a_406a_844e_20a50f1c74c3.slice. Oct 8 20:00:57.235756 kubelet[2461]: I1008 20:00:57.235725 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-xtables-lock\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235891 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cni-path\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235919 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e6bb1ad-171a-406a-844e-20a50f1c74c3-clustermesh-secrets\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235937 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjg7r\" (UniqueName: \"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235952 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-bpf-maps\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235967 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-cgroup\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236105 kubelet[2461]: I1008 20:00:57.235982 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-kernel\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236300 kubelet[2461]: I1008 20:00:57.236000 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hubble-tls\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236300 kubelet[2461]: I1008 20:00:57.236041 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49d75f5-627d-4276-ba1b-87e9699c9a98-xtables-lock\") pod \"kube-proxy-wqqmg\" (UID: \"f49d75f5-627d-4276-ba1b-87e9699c9a98\") " pod="kube-system/kube-proxy-wqqmg" Oct 8 20:00:57.236300 kubelet[2461]: I1008 20:00:57.236060 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-run\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236300 kubelet[2461]: I1008 20:00:57.236077 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hostproc\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236300 kubelet[2461]: I1008 20:00:57.236092 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-lib-modules\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236537 kubelet[2461]: I1008 20:00:57.236430 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49d75f5-627d-4276-ba1b-87e9699c9a98-lib-modules\") pod \"kube-proxy-wqqmg\" (UID: \"f49d75f5-627d-4276-ba1b-87e9699c9a98\") " pod="kube-system/kube-proxy-wqqmg" Oct 8 20:00:57.236537 kubelet[2461]: I1008 20:00:57.236459 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-config-path\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236537 kubelet[2461]: I1008 20:00:57.236474 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f49d75f5-627d-4276-ba1b-87e9699c9a98-kube-proxy\") pod \"kube-proxy-wqqmg\" (UID: \"f49d75f5-627d-4276-ba1b-87e9699c9a98\") " pod="kube-system/kube-proxy-wqqmg" 
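The reconciler entries above only name the host-path volumes being wired up for cilium-tv4zm (bpf-maps, cni-path, xtables-lock and so on), not where they point on the host. Purely for orientation, here is how two of them could be declared with the core/v1 Go types; the host paths are assumptions based on typical Cilium defaults, since this log does not record them.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    dirOrCreate := corev1.HostPathDirectoryOrCreate
    // Illustrative reconstruction of two of the cilium-tv4zm host-path
    // volumes; only the volume names come from the log.
    vols := []corev1.Volume{
        {Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &dirOrCreate}}},
        {Name: "cni-path", VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin", Type: &dirOrCreate}}},
    }
    for _, v := range vols {
        fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
    }
}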
Oct 8 20:00:57.236537 kubelet[2461]: I1008 20:00:57.236492 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-etc-cni-netd\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.236718 kubelet[2461]: I1008 20:00:57.236549 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szr6j\" (UniqueName: \"kubernetes.io/projected/f49d75f5-627d-4276-ba1b-87e9699c9a98-kube-api-access-szr6j\") pod \"kube-proxy-wqqmg\" (UID: \"f49d75f5-627d-4276-ba1b-87e9699c9a98\") " pod="kube-system/kube-proxy-wqqmg" Oct 8 20:00:57.236718 kubelet[2461]: I1008 20:00:57.236584 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-net\") pod \"cilium-tv4zm\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " pod="kube-system/cilium-tv4zm" Oct 8 20:00:57.346090 kubelet[2461]: E1008 20:00:57.346050 2461 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:00:57.346090 kubelet[2461]: E1008 20:00:57.346081 2461 projected.go:194] Error preparing data for projected volume kube-api-access-szr6j for pod kube-system/kube-proxy-wqqmg: configmap "kube-root-ca.crt" not found Oct 8 20:00:57.346215 kubelet[2461]: E1008 20:00:57.346128 2461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f49d75f5-627d-4276-ba1b-87e9699c9a98-kube-api-access-szr6j podName:f49d75f5-627d-4276-ba1b-87e9699c9a98 nodeName:}" failed. No retries permitted until 2024-10-08 20:00:57.846109659 +0000 UTC m=+7.330187954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-szr6j" (UniqueName: "kubernetes.io/projected/f49d75f5-627d-4276-ba1b-87e9699c9a98-kube-api-access-szr6j") pod "kube-proxy-wqqmg" (UID: "f49d75f5-627d-4276-ba1b-87e9699c9a98") : configmap "kube-root-ca.crt" not found Oct 8 20:00:57.348363 kubelet[2461]: E1008 20:00:57.348339 2461 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:00:57.348445 kubelet[2461]: E1008 20:00:57.348365 2461 projected.go:194] Error preparing data for projected volume kube-api-access-mjg7r for pod kube-system/cilium-tv4zm: configmap "kube-root-ca.crt" not found Oct 8 20:00:57.348445 kubelet[2461]: E1008 20:00:57.348405 2461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r podName:3e6bb1ad-171a-406a-844e-20a50f1c74c3 nodeName:}" failed. No retries permitted until 2024-10-08 20:00:57.848393681 +0000 UTC m=+7.332472016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mjg7r" (UniqueName: "kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r") pod "cilium-tv4zm" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3") : configmap "kube-root-ca.crt" not found Oct 8 20:00:57.555566 systemd[1]: Created slice kubepods-besteffort-pod6386092f_c2a4_4ef1_a950_5152151491f5.slice - libcontainer container kubepods-besteffort-pod6386092f_c2a4_4ef1_a950_5152151491f5.slice. 
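The MountVolume.SetUp failures just above are the projected service-account volumes (kube-api-access-szr6j for kube-proxy-wqqmg, kube-api-access-mjg7r for cilium-tv4zm) waiting on the kube-root-ca.crt ConfigMap, which the controller-manager's CA publisher only creates in each namespace once the control plane is reachable; the kubelet simply retries 500ms later. A small client-go probe for that ConfigMap, as an illustration (the kubeconfig path is assumed):

package main

import (
    "context"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // path assumed
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // The projected token volume bundles this ConfigMap; until it is
    // published, the volume mounts above keep failing and retrying.
    cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "kube-root-ca.crt", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("kube-root-ca.crt not published yet: %v", err)
    }
    log.Printf("kube-root-ca.crt present, %d bytes of CA data", len(cm.Data["ca.crt"]))
}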
Oct 8 20:00:57.639650 kubelet[2461]: I1008 20:00:57.639576 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzn8\" (UniqueName: \"kubernetes.io/projected/6386092f-c2a4-4ef1-a950-5152151491f5-kube-api-access-hjzn8\") pod \"cilium-operator-5d85765b45-vvr5q\" (UID: \"6386092f-c2a4-4ef1-a950-5152151491f5\") " pod="kube-system/cilium-operator-5d85765b45-vvr5q" Oct 8 20:00:57.639650 kubelet[2461]: I1008 20:00:57.639621 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6386092f-c2a4-4ef1-a950-5152151491f5-cilium-config-path\") pod \"cilium-operator-5d85765b45-vvr5q\" (UID: \"6386092f-c2a4-4ef1-a950-5152151491f5\") " pod="kube-system/cilium-operator-5d85765b45-vvr5q" Oct 8 20:00:57.858894 kubelet[2461]: E1008 20:00:57.858800 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:57.859298 containerd[1444]: time="2024-10-08T20:00:57.859261004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vvr5q,Uid:6386092f-c2a4-4ef1-a950-5152151491f5,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:57.878131 containerd[1444]: time="2024-10-08T20:00:57.877716537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:57.878131 containerd[1444]: time="2024-10-08T20:00:57.878081408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:57.878131 containerd[1444]: time="2024-10-08T20:00:57.878094408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:57.878392 containerd[1444]: time="2024-10-08T20:00:57.878169926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:57.893837 systemd[1]: Started cri-containerd-1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5.scope - libcontainer container 1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5. 
Oct 8 20:00:57.917386 containerd[1444]: time="2024-10-08T20:00:57.917352575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vvr5q,Uid:6386092f-c2a4-4ef1-a950-5152151491f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\"" Oct 8 20:00:57.918056 kubelet[2461]: E1008 20:00:57.918034 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:57.920284 containerd[1444]: time="2024-10-08T20:00:57.920247302Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 20:00:58.098070 kubelet[2461]: E1008 20:00:58.098043 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:58.098771 containerd[1444]: time="2024-10-08T20:00:58.098736964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqqmg,Uid:f49d75f5-627d-4276-ba1b-87e9699c9a98,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:58.106435 kubelet[2461]: E1008 20:00:58.106409 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:58.106839 containerd[1444]: time="2024-10-08T20:00:58.106723213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tv4zm,Uid:3e6bb1ad-171a-406a-844e-20a50f1c74c3,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:58.115885 containerd[1444]: time="2024-10-08T20:00:58.115335327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:58.115885 containerd[1444]: time="2024-10-08T20:00:58.115786796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:58.116025 containerd[1444]: time="2024-10-08T20:00:58.115799996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:58.116025 containerd[1444]: time="2024-10-08T20:00:58.115877674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:58.128824 containerd[1444]: time="2024-10-08T20:00:58.128363255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:58.128824 containerd[1444]: time="2024-10-08T20:00:58.128816364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:58.128965 containerd[1444]: time="2024-10-08T20:00:58.128832364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:58.128965 containerd[1444]: time="2024-10-08T20:00:58.128928202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:58.134851 systemd[1]: Started cri-containerd-707a6adcde901d11fa9328f2a436498d2063126b54210b013db65fe90e796e89.scope - libcontainer container 707a6adcde901d11fa9328f2a436498d2063126b54210b013db65fe90e796e89. Oct 8 20:00:58.140460 systemd[1]: Started cri-containerd-a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600.scope - libcontainer container a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600. Oct 8 20:00:58.159977 containerd[1444]: time="2024-10-08T20:00:58.159942900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqqmg,Uid:f49d75f5-627d-4276-ba1b-87e9699c9a98,Namespace:kube-system,Attempt:0,} returns sandbox id \"707a6adcde901d11fa9328f2a436498d2063126b54210b013db65fe90e796e89\"" Oct 8 20:00:58.160597 kubelet[2461]: E1008 20:00:58.160574 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:58.164126 containerd[1444]: time="2024-10-08T20:00:58.164095081Z" level=info msg="CreateContainer within sandbox \"707a6adcde901d11fa9328f2a436498d2063126b54210b013db65fe90e796e89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:00:58.165797 containerd[1444]: time="2024-10-08T20:00:58.165769441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tv4zm,Uid:3e6bb1ad-171a-406a-844e-20a50f1c74c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\"" Oct 8 20:00:58.166545 kubelet[2461]: E1008 20:00:58.166527 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:58.189803 containerd[1444]: time="2024-10-08T20:00:58.189755427Z" level=info msg="CreateContainer within sandbox \"707a6adcde901d11fa9328f2a436498d2063126b54210b013db65fe90e796e89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce6bca32b5fcb2c7046313e793e765fbeb82fe5bb4b9db226cc28fdcd16c9e8a\"" Oct 8 20:00:58.190851 containerd[1444]: time="2024-10-08T20:00:58.190246975Z" level=info msg="StartContainer for \"ce6bca32b5fcb2c7046313e793e765fbeb82fe5bb4b9db226cc28fdcd16c9e8a\"" Oct 8 20:00:58.224855 systemd[1]: Started cri-containerd-ce6bca32b5fcb2c7046313e793e765fbeb82fe5bb4b9db226cc28fdcd16c9e8a.scope - libcontainer container ce6bca32b5fcb2c7046313e793e765fbeb82fe5bb4b9db226cc28fdcd16c9e8a. Oct 8 20:00:58.245268 containerd[1444]: time="2024-10-08T20:00:58.245217381Z" level=info msg="StartContainer for \"ce6bca32b5fcb2c7046313e793e765fbeb82fe5bb4b9db226cc28fdcd16c9e8a\" returns successfully" Oct 8 20:00:58.641349 kubelet[2461]: E1008 20:00:58.641280 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:00:59.329713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263141695.mount: Deactivated successfully. 
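A few entries earlier the kubelet logged "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24", and containerd answered that no CNI config template is specified and it will wait for another component (presumably Cilium, which is being rolled out here) to drop one. That handoff is a single CRI call; a hedged sketch of it against containerd's socket using the published CRI API, not the kubelet's own code:

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)

    // Hand the node's pod CIDR to the runtime, as the kubelet did above.
    _, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
        RuntimeConfig: &runtimeapi.RuntimeConfig{
            NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    log.Println("runtime config updated with pod CIDR 192.168.0.0/24")
}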
Oct 8 20:00:59.738807 containerd[1444]: time="2024-10-08T20:00:59.738764056Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:59.739580 containerd[1444]: time="2024-10-08T20:00:59.739545399Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138338" Oct 8 20:00:59.740304 containerd[1444]: time="2024-10-08T20:00:59.740244863Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:59.742155 containerd[1444]: time="2024-10-08T20:00:59.742067342Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.821783481s" Oct 8 20:00:59.742155 containerd[1444]: time="2024-10-08T20:00:59.742103741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 8 20:00:59.743449 containerd[1444]: time="2024-10-08T20:00:59.743248635Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 20:00:59.744094 containerd[1444]: time="2024-10-08T20:00:59.744069296Z" level=info msg="CreateContainer within sandbox \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 20:00:59.759216 containerd[1444]: time="2024-10-08T20:00:59.759174914Z" level=info msg="CreateContainer within sandbox \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\"" Oct 8 20:00:59.760338 containerd[1444]: time="2024-10-08T20:00:59.760311489Z" level=info msg="StartContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\"" Oct 8 20:00:59.787854 systemd[1]: Started cri-containerd-ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7.scope - libcontainer container ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7. 
Oct 8 20:00:59.808017 containerd[1444]: time="2024-10-08T20:00:59.807890612Z" level=info msg="StartContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" returns successfully" Oct 8 20:01:00.126055 kubelet[2461]: E1008 20:01:00.125959 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:00.135409 kubelet[2461]: I1008 20:01:00.134987 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wqqmg" podStartSLOduration=3.134971814 podStartE2EDuration="3.134971814s" podCreationTimestamp="2024-10-08 20:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:00:58.64925144 +0000 UTC m=+8.133329775" watchObservedRunningTime="2024-10-08 20:01:00.134971814 +0000 UTC m=+9.619050149" Oct 8 20:01:00.651500 kubelet[2461]: E1008 20:01:00.651318 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:00.651500 kubelet[2461]: E1008 20:01:00.651417 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:00.662645 kubelet[2461]: I1008 20:01:00.662530 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vvr5q" podStartSLOduration=1.839354706 podStartE2EDuration="3.662514634s" podCreationTimestamp="2024-10-08 20:00:57 +0000 UTC" firstStartedPulling="2024-10-08 20:00:57.919686676 +0000 UTC m=+7.403765011" lastFinishedPulling="2024-10-08 20:00:59.742846644 +0000 UTC m=+9.226924939" observedRunningTime="2024-10-08 20:01:00.661373218 +0000 UTC m=+10.145451553" watchObservedRunningTime="2024-10-08 20:01:00.662514634 +0000 UTC m=+10.146592969" Oct 8 20:01:00.778967 kubelet[2461]: E1008 20:01:00.778935 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:01.669304 kubelet[2461]: E1008 20:01:01.669204 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:01.670038 kubelet[2461]: E1008 20:01:01.669916 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:04.658599 kubelet[2461]: E1008 20:01:04.658567 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:05.868723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539528124.mount: Deactivated successfully. Oct 8 20:01:07.129585 update_engine[1431]: I20241008 20:01:07.129513 1431 update_attempter.cc:509] Updating boot flags... 
Oct 8 20:01:07.220561 containerd[1444]: time="2024-10-08T20:01:07.220112149Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:07.222760 containerd[1444]: time="2024-10-08T20:01:07.222072280Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651542" Oct 8 20:01:07.223720 containerd[1444]: time="2024-10-08T20:01:07.223544178Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:07.225434 containerd[1444]: time="2024-10-08T20:01:07.225364511Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.482080397s" Oct 8 20:01:07.228120 containerd[1444]: time="2024-10-08T20:01:07.225531629Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 8 20:01:07.236922 containerd[1444]: time="2024-10-08T20:01:07.236781062Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:01:07.240710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2927) Oct 8 20:01:07.272340 containerd[1444]: time="2024-10-08T20:01:07.272245055Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\"" Oct 8 20:01:07.272843 containerd[1444]: time="2024-10-08T20:01:07.272721928Z" level=info msg="StartContainer for \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\"" Oct 8 20:01:07.284826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2927) Oct 8 20:01:07.321840 systemd[1]: Started cri-containerd-ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8.scope - libcontainer container ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8. Oct 8 20:01:07.341243 containerd[1444]: time="2024-10-08T20:01:07.341135033Z" level=info msg="StartContainer for \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\" returns successfully" Oct 8 20:01:07.392540 systemd[1]: cri-containerd-ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8.scope: Deactivated successfully. 
Oct 8 20:01:07.451784 containerd[1444]: time="2024-10-08T20:01:07.448348761Z" level=info msg="shim disconnected" id=ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8 namespace=k8s.io Oct 8 20:01:07.451784 containerd[1444]: time="2024-10-08T20:01:07.451719351Z" level=warning msg="cleaning up after shim disconnected" id=ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8 namespace=k8s.io Oct 8 20:01:07.451784 containerd[1444]: time="2024-10-08T20:01:07.451734231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:07.667376 kubelet[2461]: E1008 20:01:07.666446 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:07.682220 containerd[1444]: time="2024-10-08T20:01:07.682168770Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:01:07.692701 containerd[1444]: time="2024-10-08T20:01:07.692141902Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\"" Oct 8 20:01:07.694074 containerd[1444]: time="2024-10-08T20:01:07.694033674Z" level=info msg="StartContainer for \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\"" Oct 8 20:01:07.718838 systemd[1]: Started cri-containerd-da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819.scope - libcontainer container da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819. Oct 8 20:01:07.737804 containerd[1444]: time="2024-10-08T20:01:07.737765905Z" level=info msg="StartContainer for \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\" returns successfully" Oct 8 20:01:07.750927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:01:07.751131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:01:07.751194 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:01:07.757980 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:01:07.758433 systemd[1]: cri-containerd-da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819.scope: Deactivated successfully. Oct 8 20:01:07.770012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:01:07.773202 containerd[1444]: time="2024-10-08T20:01:07.773142180Z" level=info msg="shim disconnected" id=da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819 namespace=k8s.io Oct 8 20:01:07.773202 containerd[1444]: time="2024-10-08T20:01:07.773199259Z" level=warning msg="cleaning up after shim disconnected" id=da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819 namespace=k8s.io Oct 8 20:01:07.773202 containerd[1444]: time="2024-10-08T20:01:07.773207979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:08.268164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8-rootfs.mount: Deactivated successfully. 
Oct 8 20:01:08.669829 kubelet[2461]: E1008 20:01:08.669581 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:08.673273 containerd[1444]: time="2024-10-08T20:01:08.673130663Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:01:08.707910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399790288.mount: Deactivated successfully. Oct 8 20:01:08.709892 containerd[1444]: time="2024-10-08T20:01:08.709858905Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\"" Oct 8 20:01:08.710440 containerd[1444]: time="2024-10-08T20:01:08.710394417Z" level=info msg="StartContainer for \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\"" Oct 8 20:01:08.740847 systemd[1]: Started cri-containerd-e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25.scope - libcontainer container e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25. Oct 8 20:01:08.760502 containerd[1444]: time="2024-10-08T20:01:08.760461510Z" level=info msg="StartContainer for \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\" returns successfully" Oct 8 20:01:08.780155 systemd[1]: cri-containerd-e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25.scope: Deactivated successfully. Oct 8 20:01:08.800449 containerd[1444]: time="2024-10-08T20:01:08.800395106Z" level=info msg="shim disconnected" id=e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25 namespace=k8s.io Oct 8 20:01:08.800591 containerd[1444]: time="2024-10-08T20:01:08.800447265Z" level=warning msg="cleaning up after shim disconnected" id=e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25 namespace=k8s.io Oct 8 20:01:08.800591 containerd[1444]: time="2024-10-08T20:01:08.800462025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:09.267894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25-rootfs.mount: Deactivated successfully. 
Oct 8 20:01:09.673360 kubelet[2461]: E1008 20:01:09.673002 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:09.675947 containerd[1444]: time="2024-10-08T20:01:09.675724157Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:01:09.693139 containerd[1444]: time="2024-10-08T20:01:09.693100563Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\"" Oct 8 20:01:09.693654 containerd[1444]: time="2024-10-08T20:01:09.693629796Z" level=info msg="StartContainer for \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\"" Oct 8 20:01:09.722848 systemd[1]: Started cri-containerd-6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051.scope - libcontainer container 6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051. Oct 8 20:01:09.744915 systemd[1]: cri-containerd-6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051.scope: Deactivated successfully. Oct 8 20:01:09.747079 containerd[1444]: time="2024-10-08T20:01:09.746765041Z" level=info msg="StartContainer for \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\" returns successfully" Oct 8 20:01:09.768735 containerd[1444]: time="2024-10-08T20:01:09.768628467Z" level=info msg="shim disconnected" id=6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051 namespace=k8s.io Oct 8 20:01:09.768735 containerd[1444]: time="2024-10-08T20:01:09.768701106Z" level=warning msg="cleaning up after shim disconnected" id=6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051 namespace=k8s.io Oct 8 20:01:09.768735 containerd[1444]: time="2024-10-08T20:01:09.768712306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:10.267900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051-rootfs.mount: Deactivated successfully. 
Oct 8 20:01:10.678149 kubelet[2461]: E1008 20:01:10.677978 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.685939 containerd[1444]: time="2024-10-08T20:01:10.685889083Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:01:10.725800 containerd[1444]: time="2024-10-08T20:01:10.725746292Z" level=info msg="CreateContainer within sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\"" Oct 8 20:01:10.726432 containerd[1444]: time="2024-10-08T20:01:10.726387044Z" level=info msg="StartContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\"" Oct 8 20:01:10.751860 systemd[1]: Started cri-containerd-a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4.scope - libcontainer container a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4. Oct 8 20:01:10.774808 containerd[1444]: time="2024-10-08T20:01:10.774770784Z" level=info msg="StartContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" returns successfully" Oct 8 20:01:10.955196 kubelet[2461]: I1008 20:01:10.955154 2461 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 20:01:11.015158 systemd[1]: Created slice kubepods-burstable-podf4eec5b4_f455_47a0_92e7_c53918a79879.slice - libcontainer container kubepods-burstable-podf4eec5b4_f455_47a0_92e7_c53918a79879.slice. Oct 8 20:01:11.023942 systemd[1]: Created slice kubepods-burstable-pod19a60031_5756_4748_8c39_e41521833f3c.slice - libcontainer container kubepods-burstable-pod19a60031_5756_4748_8c39_e41521833f3c.slice. 
Oct 8 20:01:11.043217 kubelet[2461]: I1008 20:01:11.043024 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdztp\" (UniqueName: \"kubernetes.io/projected/f4eec5b4-f455-47a0-92e7-c53918a79879-kube-api-access-xdztp\") pod \"coredns-6f6b679f8f-nwhlc\" (UID: \"f4eec5b4-f455-47a0-92e7-c53918a79879\") " pod="kube-system/coredns-6f6b679f8f-nwhlc" Oct 8 20:01:11.043217 kubelet[2461]: I1008 20:01:11.043070 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a60031-5756-4748-8c39-e41521833f3c-config-volume\") pod \"coredns-6f6b679f8f-w9b99\" (UID: \"19a60031-5756-4748-8c39-e41521833f3c\") " pod="kube-system/coredns-6f6b679f8f-w9b99" Oct 8 20:01:11.043217 kubelet[2461]: I1008 20:01:11.043092 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4eec5b4-f455-47a0-92e7-c53918a79879-config-volume\") pod \"coredns-6f6b679f8f-nwhlc\" (UID: \"f4eec5b4-f455-47a0-92e7-c53918a79879\") " pod="kube-system/coredns-6f6b679f8f-nwhlc" Oct 8 20:01:11.043217 kubelet[2461]: I1008 20:01:11.043108 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs7zj\" (UniqueName: \"kubernetes.io/projected/19a60031-5756-4748-8c39-e41521833f3c-kube-api-access-qs7zj\") pod \"coredns-6f6b679f8f-w9b99\" (UID: \"19a60031-5756-4748-8c39-e41521833f3c\") " pod="kube-system/coredns-6f6b679f8f-w9b99" Oct 8 20:01:11.321130 kubelet[2461]: E1008 20:01:11.321034 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:11.322162 containerd[1444]: time="2024-10-08T20:01:11.322106077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nwhlc,Uid:f4eec5b4-f455-47a0-92e7-c53918a79879,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:11.330200 kubelet[2461]: E1008 20:01:11.329675 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:11.330918 containerd[1444]: time="2024-10-08T20:01:11.330434415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w9b99,Uid:19a60031-5756-4748-8c39-e41521833f3c,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:11.682184 kubelet[2461]: E1008 20:01:11.681780 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:12.683069 kubelet[2461]: E1008 20:01:12.683042 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:12.978797 systemd-networkd[1372]: cilium_host: Link UP Oct 8 20:01:12.978929 systemd-networkd[1372]: cilium_net: Link UP Oct 8 20:01:12.979057 systemd-networkd[1372]: cilium_net: Gained carrier Oct 8 20:01:12.979207 systemd-networkd[1372]: cilium_host: Gained carrier Oct 8 20:01:13.057316 systemd-networkd[1372]: cilium_vxlan: Link UP Oct 8 20:01:13.057326 systemd-networkd[1372]: cilium_vxlan: Gained carrier Oct 8 20:01:13.323726 kernel: NET: Registered PF_ALG protocol family Oct 8 20:01:13.408800 
systemd-networkd[1372]: cilium_host: Gained IPv6LL Oct 8 20:01:13.687186 kubelet[2461]: E1008 20:01:13.687052 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:13.864820 systemd-networkd[1372]: cilium_net: Gained IPv6LL Oct 8 20:01:13.878489 systemd-networkd[1372]: lxc_health: Link UP Oct 8 20:01:13.885033 systemd-networkd[1372]: lxc_health: Gained carrier Oct 8 20:01:14.131067 kubelet[2461]: I1008 20:01:14.130960 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tv4zm" podStartSLOduration=8.067854548 podStartE2EDuration="17.130941696s" podCreationTimestamp="2024-10-08 20:00:57 +0000 UTC" firstStartedPulling="2024-10-08 20:00:58.166855975 +0000 UTC m=+7.650934310" lastFinishedPulling="2024-10-08 20:01:07.229943123 +0000 UTC m=+16.714021458" observedRunningTime="2024-10-08 20:01:11.696092305 +0000 UTC m=+21.180170680" watchObservedRunningTime="2024-10-08 20:01:14.130941696 +0000 UTC m=+23.615020031" Oct 8 20:01:14.447254 systemd-networkd[1372]: lxc72d3c9e3c42e: Link UP Oct 8 20:01:14.447377 systemd-networkd[1372]: lxc3a505ec49f0b: Link UP Oct 8 20:01:14.468720 kernel: eth0: renamed from tmpcb5c5 Oct 8 20:01:14.484926 kernel: eth0: renamed from tmp841e9 Oct 8 20:01:14.488942 systemd-networkd[1372]: lxc3a505ec49f0b: Gained carrier Oct 8 20:01:14.490255 systemd-networkd[1372]: lxc72d3c9e3c42e: Gained carrier Oct 8 20:01:14.689031 kubelet[2461]: E1008 20:01:14.688989 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:15.016869 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Oct 8 20:01:15.400848 systemd-networkd[1372]: lxc_health: Gained IPv6LL Oct 8 20:01:15.690749 kubelet[2461]: E1008 20:01:15.690526 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:16.232823 systemd-networkd[1372]: lxc72d3c9e3c42e: Gained IPv6LL Oct 8 20:01:16.296943 systemd-networkd[1372]: lxc3a505ec49f0b: Gained IPv6LL Oct 8 20:01:16.691654 kubelet[2461]: E1008 20:01:16.691527 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:17.993360 containerd[1444]: time="2024-10-08T20:01:17.993097033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:17.993360 containerd[1444]: time="2024-10-08T20:01:17.993170993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:17.993360 containerd[1444]: time="2024-10-08T20:01:17.993186073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:17.993360 containerd[1444]: time="2024-10-08T20:01:17.993266312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:18.019352 containerd[1444]: time="2024-10-08T20:01:18.019239395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:18.019352 containerd[1444]: time="2024-10-08T20:01:18.019313234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:18.019352 containerd[1444]: time="2024-10-08T20:01:18.019327874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:18.019552 containerd[1444]: time="2024-10-08T20:01:18.019413634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:18.019851 systemd[1]: Started cri-containerd-841e97a7f3b0612413b99b0606a1493a50987a813681fef68365ada72a8f5b82.scope - libcontainer container 841e97a7f3b0612413b99b0606a1493a50987a813681fef68365ada72a8f5b82. Oct 8 20:01:18.035905 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:01:18.043876 systemd[1]: Started cri-containerd-cb5c5aa8f8b914dc97efc4ea59dcebf2ec3c406b8d369e87335949e6e5fcffc6.scope - libcontainer container cb5c5aa8f8b914dc97efc4ea59dcebf2ec3c406b8d369e87335949e6e5fcffc6. Oct 8 20:01:18.053222 containerd[1444]: time="2024-10-08T20:01:18.052935292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nwhlc,Uid:f4eec5b4-f455-47a0-92e7-c53918a79879,Namespace:kube-system,Attempt:0,} returns sandbox id \"841e97a7f3b0612413b99b0606a1493a50987a813681fef68365ada72a8f5b82\"" Oct 8 20:01:18.055223 kubelet[2461]: E1008 20:01:18.054759 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.058034 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:01:18.058197 containerd[1444]: time="2024-10-08T20:01:18.058034006Z" level=info msg="CreateContainer within sandbox \"841e97a7f3b0612413b99b0606a1493a50987a813681fef68365ada72a8f5b82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:01:18.077223 containerd[1444]: time="2024-10-08T20:01:18.077175274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w9b99,Uid:19a60031-5756-4748-8c39-e41521833f3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb5c5aa8f8b914dc97efc4ea59dcebf2ec3c406b8d369e87335949e6e5fcffc6\"" Oct 8 20:01:18.078038 containerd[1444]: time="2024-10-08T20:01:18.078007947Z" level=info msg="CreateContainer within sandbox \"841e97a7f3b0612413b99b0606a1493a50987a813681fef68365ada72a8f5b82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a69b7f25f68220a5acda96623498645fe05854e3aaab8e30c8fc3f004fdbe09\"" Oct 8 20:01:18.078097 kubelet[2461]: E1008 20:01:18.078032 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.078715 containerd[1444]: time="2024-10-08T20:01:18.078508262Z" level=info msg="StartContainer for \"3a69b7f25f68220a5acda96623498645fe05854e3aaab8e30c8fc3f004fdbe09\"" Oct 8 20:01:18.080624 containerd[1444]: time="2024-10-08T20:01:18.080403765Z" level=info msg="CreateContainer within sandbox \"cb5c5aa8f8b914dc97efc4ea59dcebf2ec3c406b8d369e87335949e6e5fcffc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 
20:01:18.092982 containerd[1444]: time="2024-10-08T20:01:18.092873973Z" level=info msg="CreateContainer within sandbox \"cb5c5aa8f8b914dc97efc4ea59dcebf2ec3c406b8d369e87335949e6e5fcffc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62dc8d6eface88217a02de51795030f7626b69bd7933469b4fefc79af58a7804\"" Oct 8 20:01:18.093749 containerd[1444]: time="2024-10-08T20:01:18.093636726Z" level=info msg="StartContainer for \"62dc8d6eface88217a02de51795030f7626b69bd7933469b4fefc79af58a7804\"" Oct 8 20:01:18.109859 systemd[1]: Started cri-containerd-3a69b7f25f68220a5acda96623498645fe05854e3aaab8e30c8fc3f004fdbe09.scope - libcontainer container 3a69b7f25f68220a5acda96623498645fe05854e3aaab8e30c8fc3f004fdbe09. Oct 8 20:01:18.114241 systemd[1]: Started cri-containerd-62dc8d6eface88217a02de51795030f7626b69bd7933469b4fefc79af58a7804.scope - libcontainer container 62dc8d6eface88217a02de51795030f7626b69bd7933469b4fefc79af58a7804. Oct 8 20:01:18.142521 containerd[1444]: time="2024-10-08T20:01:18.142470327Z" level=info msg="StartContainer for \"3a69b7f25f68220a5acda96623498645fe05854e3aaab8e30c8fc3f004fdbe09\" returns successfully" Oct 8 20:01:18.142633 containerd[1444]: time="2024-10-08T20:01:18.142568046Z" level=info msg="StartContainer for \"62dc8d6eface88217a02de51795030f7626b69bd7933469b4fefc79af58a7804\" returns successfully" Oct 8 20:01:18.662385 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:48900.service - OpenSSH per-connection server daemon (10.0.0.1:48900). Oct 8 20:01:18.697491 kubelet[2461]: E1008 20:01:18.697263 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.702127 kubelet[2461]: E1008 20:01:18.702085 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.705465 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 48900 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:18.708308 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:18.712810 systemd-logind[1425]: New session 8 of user core. Oct 8 20:01:18.720847 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:01:18.728230 kubelet[2461]: I1008 20:01:18.725389 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nwhlc" podStartSLOduration=21.725372046 podStartE2EDuration="21.725372046s" podCreationTimestamp="2024-10-08 20:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:18.71494626 +0000 UTC m=+28.199024595" watchObservedRunningTime="2024-10-08 20:01:18.725372046 +0000 UTC m=+28.209450381" Oct 8 20:01:18.852122 sshd[3871]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:18.855555 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:48900.service: Deactivated successfully. Oct 8 20:01:18.857368 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:01:18.857981 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:01:18.858830 systemd-logind[1425]: Removed session 8. Oct 8 20:01:18.998693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872443568.mount: Deactivated successfully. 
Oct 8 20:01:19.703603 kubelet[2461]: E1008 20:01:19.703562 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:19.703945 kubelet[2461]: E1008 20:01:19.703662 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:20.704869 kubelet[2461]: E1008 20:01:20.704826 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:20.705260 kubelet[2461]: E1008 20:01:20.704907 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.863729 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:60524.service - OpenSSH per-connection server daemon (10.0.0.1:60524). Oct 8 20:01:23.918666 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 60524 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:23.920340 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:23.924258 systemd-logind[1425]: New session 9 of user core. Oct 8 20:01:23.941877 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:01:24.059966 sshd[3897]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:24.064222 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:60524.service: Deactivated successfully. Oct 8 20:01:24.066173 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:01:24.066929 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:01:24.068188 systemd-logind[1425]: Removed session 9. Oct 8 20:01:29.069177 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532). Oct 8 20:01:29.110403 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:29.111541 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:29.116232 systemd-logind[1425]: New session 10 of user core. Oct 8 20:01:29.121928 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:01:29.230795 sshd[3914]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:29.241146 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:60532.service: Deactivated successfully. Oct 8 20:01:29.242888 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:01:29.244184 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:01:29.253105 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538). Oct 8 20:01:29.254177 systemd-logind[1425]: Removed session 10. Oct 8 20:01:29.288060 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:29.289166 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:29.292481 systemd-logind[1425]: New session 11 of user core. Oct 8 20:01:29.298808 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 8 20:01:29.437969 sshd[3930]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:29.449539 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:60538.service: Deactivated successfully. Oct 8 20:01:29.453540 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:01:29.456273 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:01:29.463040 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554). Oct 8 20:01:29.464991 systemd-logind[1425]: Removed session 11. Oct 8 20:01:29.498139 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:29.499249 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:29.502915 systemd-logind[1425]: New session 12 of user core. Oct 8 20:01:29.510827 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:01:29.617797 sshd[3943]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:29.621019 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:60554.service: Deactivated successfully. Oct 8 20:01:29.623189 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:01:29.623718 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:01:29.624667 systemd-logind[1425]: Removed session 12. Oct 8 20:01:34.631198 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:38758.service - OpenSSH per-connection server daemon (10.0.0.1:38758). Oct 8 20:01:34.669896 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 38758 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:34.671167 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:34.674444 systemd-logind[1425]: New session 13 of user core. Oct 8 20:01:34.679909 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:01:34.786467 sshd[3958]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:34.789588 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:38758.service: Deactivated successfully. Oct 8 20:01:34.791226 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:01:34.793150 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:01:34.794084 systemd-logind[1425]: Removed session 13. Oct 8 20:01:39.800844 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:38762.service - OpenSSH per-connection server daemon (10.0.0.1:38762). Oct 8 20:01:39.845209 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 38762 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:39.846440 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:39.851710 systemd-logind[1425]: New session 14 of user core. Oct 8 20:01:39.860918 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:01:39.983390 sshd[3972]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:39.994235 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:38762.service: Deactivated successfully. Oct 8 20:01:39.996420 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:01:39.998891 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:01:40.006977 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:38776.service - OpenSSH per-connection server daemon (10.0.0.1:38776). 
Oct 8 20:01:40.008760 systemd-logind[1425]: Removed session 14. Oct 8 20:01:40.043895 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 38776 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:40.045387 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:40.053579 systemd-logind[1425]: New session 15 of user core. Oct 8 20:01:40.056002 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:01:40.359284 sshd[3987]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:40.377253 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:38776.service: Deactivated successfully. Oct 8 20:01:40.378668 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:01:40.380091 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:01:40.381434 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:38792.service - OpenSSH per-connection server daemon (10.0.0.1:38792). Oct 8 20:01:40.382353 systemd-logind[1425]: Removed session 15. Oct 8 20:01:40.430801 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 38792 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:40.432599 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:40.437633 systemd-logind[1425]: New session 16 of user core. Oct 8 20:01:40.445833 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:01:41.742611 sshd[4000]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:41.749567 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:38792.service: Deactivated successfully. Oct 8 20:01:41.752167 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:01:41.754958 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:01:41.764469 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:38808.service - OpenSSH per-connection server daemon (10.0.0.1:38808). Oct 8 20:01:41.765908 systemd-logind[1425]: Removed session 16. Oct 8 20:01:41.804195 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:41.805477 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:41.809374 systemd-logind[1425]: New session 17 of user core. Oct 8 20:01:41.823868 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:01:42.045050 sshd[4021]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:42.057091 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:38808.service: Deactivated successfully. Oct 8 20:01:42.059078 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:01:42.061037 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:01:42.069990 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Oct 8 20:01:42.071070 systemd-logind[1425]: Removed session 17. Oct 8 20:01:42.105491 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:42.106980 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:42.111021 systemd-logind[1425]: New session 18 of user core. Oct 8 20:01:42.117837 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 8 20:01:42.223804 sshd[4034]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:42.227098 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:38812.service: Deactivated successfully. Oct 8 20:01:42.229116 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:01:42.229844 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:01:42.230723 systemd-logind[1425]: Removed session 18. Oct 8 20:01:47.234572 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:57604.service - OpenSSH per-connection server daemon (10.0.0.1:57604). Oct 8 20:01:47.273506 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 57604 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:47.274835 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:47.278318 systemd-logind[1425]: New session 19 of user core. Oct 8 20:01:47.284837 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:01:47.391749 sshd[4049]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:47.395779 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:57604.service: Deactivated successfully. Oct 8 20:01:47.397437 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:01:47.398853 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:01:47.399709 systemd-logind[1425]: Removed session 19. Oct 8 20:01:52.426543 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:57606.service - OpenSSH per-connection server daemon (10.0.0.1:57606). Oct 8 20:01:52.464308 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 57606 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:52.465809 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:52.469738 systemd-logind[1425]: New session 20 of user core. Oct 8 20:01:52.483884 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:01:52.593934 sshd[4069]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:52.598967 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:57606.service: Deactivated successfully. Oct 8 20:01:52.601648 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:01:52.602899 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:01:52.604039 systemd-logind[1425]: Removed session 20. Oct 8 20:01:57.603332 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:56920.service - OpenSSH per-connection server daemon (10.0.0.1:56920). Oct 8 20:01:57.642164 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 56920 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:01:57.643541 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:57.647067 systemd-logind[1425]: New session 21 of user core. Oct 8 20:01:57.658851 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:01:57.762976 sshd[4085]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:57.766075 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:56920.service: Deactivated successfully. Oct 8 20:01:57.769076 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:01:57.769610 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:01:57.770416 systemd-logind[1425]: Removed session 21. 
Oct 8 20:02:02.619943 kubelet[2461]: E1008 20:02:02.619823 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:02.781365 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). Oct 8 20:02:02.820554 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:02:02.821854 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:02.825577 systemd-logind[1425]: New session 22 of user core. Oct 8 20:02:02.839917 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:02:02.942697 sshd[4101]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:02.954142 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:57632.service: Deactivated successfully. Oct 8 20:02:02.956202 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:02:02.957468 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:02:02.967088 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:57634.service - OpenSSH per-connection server daemon (10.0.0.1:57634). Oct 8 20:02:02.967995 systemd-logind[1425]: Removed session 22. Oct 8 20:02:03.002058 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 57634 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:02:03.003349 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:03.006986 systemd-logind[1425]: New session 23 of user core. Oct 8 20:02:03.011867 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:02:04.619265 kubelet[2461]: E1008 20:02:04.619215 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:05.403991 kubelet[2461]: I1008 20:02:05.403908 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-w9b99" podStartSLOduration=68.403887187 podStartE2EDuration="1m8.403887187s" podCreationTimestamp="2024-10-08 20:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:18.740003514 +0000 UTC m=+28.224081889" watchObservedRunningTime="2024-10-08 20:02:05.403887187 +0000 UTC m=+74.887965522" Oct 8 20:02:05.409092 containerd[1444]: time="2024-10-08T20:02:05.409047430Z" level=info msg="StopContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" with timeout 30 (s)" Oct 8 20:02:05.409847 containerd[1444]: time="2024-10-08T20:02:05.409814763Z" level=info msg="Stop container \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" with signal terminated" Oct 8 20:02:05.419165 systemd[1]: cri-containerd-ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7.scope: Deactivated successfully. Oct 8 20:02:05.442796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:05.447396 containerd[1444]: time="2024-10-08T20:02:05.447329486Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:02:05.447539 containerd[1444]: time="2024-10-08T20:02:05.447438847Z" level=info msg="StopContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" with timeout 2 (s)" Oct 8 20:02:05.447900 containerd[1444]: time="2024-10-08T20:02:05.447873774Z" level=info msg="Stop container \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" with signal terminated" Oct 8 20:02:05.453475 systemd-networkd[1372]: lxc_health: Link DOWN Oct 8 20:02:05.453487 systemd-networkd[1372]: lxc_health: Lost carrier Oct 8 20:02:05.459011 containerd[1444]: time="2024-10-08T20:02:05.458952952Z" level=info msg="shim disconnected" id=ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7 namespace=k8s.io Oct 8 20:02:05.459011 containerd[1444]: time="2024-10-08T20:02:05.459006873Z" level=warning msg="cleaning up after shim disconnected" id=ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7 namespace=k8s.io Oct 8 20:02:05.459163 containerd[1444]: time="2024-10-08T20:02:05.459017553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:05.485122 systemd[1]: cri-containerd-a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4.scope: Deactivated successfully. Oct 8 20:02:05.485419 systemd[1]: cri-containerd-a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4.scope: Consumed 6.410s CPU time. Oct 8 20:02:05.503072 containerd[1444]: time="2024-10-08T20:02:05.502275449Z" level=info msg="StopContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" returns successfully" Oct 8 20:02:05.521623 containerd[1444]: time="2024-10-08T20:02:05.519139520Z" level=info msg="StopPodSandbox for \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\"" Oct 8 20:02:05.521623 containerd[1444]: time="2024-10-08T20:02:05.519332443Z" level=info msg="Container to stop \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.520279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4-rootfs.mount: Deactivated successfully. Oct 8 20:02:05.522162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5-shm.mount: Deactivated successfully. Oct 8 20:02:05.527520 containerd[1444]: time="2024-10-08T20:02:05.527457613Z" level=info msg="shim disconnected" id=a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4 namespace=k8s.io Oct 8 20:02:05.527520 containerd[1444]: time="2024-10-08T20:02:05.527507294Z" level=warning msg="cleaning up after shim disconnected" id=a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4 namespace=k8s.io Oct 8 20:02:05.527520 containerd[1444]: time="2024-10-08T20:02:05.527515534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:05.533530 systemd[1]: cri-containerd-1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5.scope: Deactivated successfully. 
Oct 8 20:02:05.546626 containerd[1444]: time="2024-10-08T20:02:05.546561680Z" level=info msg="StopContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" returns successfully" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547066248Z" level=info msg="StopPodSandbox for \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\"" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547107889Z" level=info msg="Container to stop \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547121569Z" level=info msg="Container to stop \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547131209Z" level=info msg="Container to stop \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547140890Z" level=info msg="Container to stop \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.548341 containerd[1444]: time="2024-10-08T20:02:05.547149730Z" level=info msg="Container to stop \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:05.549151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600-shm.mount: Deactivated successfully. Oct 8 20:02:05.555309 systemd[1]: cri-containerd-a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600.scope: Deactivated successfully. 
Oct 8 20:02:05.573161 containerd[1444]: time="2024-10-08T20:02:05.573014865Z" level=info msg="shim disconnected" id=1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5 namespace=k8s.io Oct 8 20:02:05.573161 containerd[1444]: time="2024-10-08T20:02:05.573084827Z" level=warning msg="cleaning up after shim disconnected" id=1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5 namespace=k8s.io Oct 8 20:02:05.573161 containerd[1444]: time="2024-10-08T20:02:05.573094187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:05.582914 containerd[1444]: time="2024-10-08T20:02:05.582840823Z" level=info msg="shim disconnected" id=a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600 namespace=k8s.io Oct 8 20:02:05.582914 containerd[1444]: time="2024-10-08T20:02:05.582889904Z" level=warning msg="cleaning up after shim disconnected" id=a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600 namespace=k8s.io Oct 8 20:02:05.582914 containerd[1444]: time="2024-10-08T20:02:05.582906624Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:05.588200 containerd[1444]: time="2024-10-08T20:02:05.588158549Z" level=info msg="TearDown network for sandbox \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\" successfully" Oct 8 20:02:05.588200 containerd[1444]: time="2024-10-08T20:02:05.588193749Z" level=info msg="StopPodSandbox for \"1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5\" returns successfully" Oct 8 20:02:05.595588 containerd[1444]: time="2024-10-08T20:02:05.595370705Z" level=info msg="TearDown network for sandbox \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" successfully" Oct 8 20:02:05.595588 containerd[1444]: time="2024-10-08T20:02:05.595406625Z" level=info msg="StopPodSandbox for \"a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600\" returns successfully" Oct 8 20:02:05.638492 kubelet[2461]: I1008 20:02:05.638431 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-kernel\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.638492 kubelet[2461]: I1008 20:02:05.638493 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-etc-cni-netd\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638512 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cni-path\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638553 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6386092f-c2a4-4ef1-a950-5152151491f5-cilium-config-path\") pod \"6386092f-c2a4-4ef1-a950-5152151491f5\" (UID: \"6386092f-c2a4-4ef1-a950-5152151491f5\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638658 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hubble-tls\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638760 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzn8\" (UniqueName: \"kubernetes.io/projected/6386092f-c2a4-4ef1-a950-5152151491f5-kube-api-access-hjzn8\") pod \"6386092f-c2a4-4ef1-a950-5152151491f5\" (UID: \"6386092f-c2a4-4ef1-a950-5152151491f5\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638787 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjg7r\" (UniqueName: \"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.638914 kubelet[2461]: I1008 20:02:05.638804 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-run\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639067 kubelet[2461]: I1008 20:02:05.638837 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-lib-modules\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639067 kubelet[2461]: I1008 20:02:05.638855 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-xtables-lock\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639067 kubelet[2461]: I1008 20:02:05.638871 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hostproc\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639067 kubelet[2461]: I1008 20:02:05.638950 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e6bb1ad-171a-406a-844e-20a50f1c74c3-clustermesh-secrets\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639067 kubelet[2461]: I1008 20:02:05.638971 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-bpf-maps\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639296 kubelet[2461]: I1008 20:02:05.639251 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-net\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639296 kubelet[2461]: I1008 20:02:05.639289 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-config-path\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.639363 kubelet[2461]: I1008 20:02:05.639309 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-cgroup\") pod \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\" (UID: \"3e6bb1ad-171a-406a-844e-20a50f1c74c3\") " Oct 8 20:02:05.641868 kubelet[2461]: I1008 20:02:05.641836 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.641868 kubelet[2461]: I1008 20:02:05.641859 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.641977 kubelet[2461]: I1008 20:02:05.641836 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.641977 kubelet[2461]: I1008 20:02:05.641842 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.642340 kubelet[2461]: I1008 20:02:05.642226 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.642340 kubelet[2461]: I1008 20:02:05.642258 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.642340 kubelet[2461]: I1008 20:02:05.642274 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.643918 kubelet[2461]: I1008 20:02:05.643851 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6386092f-c2a4-4ef1-a950-5152151491f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6386092f-c2a4-4ef1-a950-5152151491f5" (UID: "6386092f-c2a4-4ef1-a950-5152151491f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:02:05.645127 kubelet[2461]: I1008 20:02:05.644955 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6386092f-c2a4-4ef1-a950-5152151491f5-kube-api-access-hjzn8" (OuterVolumeSpecName: "kube-api-access-hjzn8") pod "6386092f-c2a4-4ef1-a950-5152151491f5" (UID: "6386092f-c2a4-4ef1-a950-5152151491f5"). InnerVolumeSpecName "kube-api-access-hjzn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:05.645127 kubelet[2461]: I1008 20:02:05.644976 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:05.645127 kubelet[2461]: I1008 20:02:05.645014 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.645127 kubelet[2461]: I1008 20:02:05.645087 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.645127 kubelet[2461]: I1008 20:02:05.645104 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:05.646165 kubelet[2461]: I1008 20:02:05.645360 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6bb1ad-171a-406a-844e-20a50f1c74c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:02:05.646566 kubelet[2461]: I1008 20:02:05.646531 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r" (OuterVolumeSpecName: "kube-api-access-mjg7r") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "kube-api-access-mjg7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:05.647392 kubelet[2461]: I1008 20:02:05.647355 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e6bb1ad-171a-406a-844e-20a50f1c74c3" (UID: "3e6bb1ad-171a-406a-844e-20a50f1c74c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:02:05.661389 kubelet[2461]: E1008 20:02:05.661283 2461 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:02:05.740104 kubelet[2461]: I1008 20:02:05.740028 2461 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740104 kubelet[2461]: I1008 20:02:05.740078 2461 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740104 kubelet[2461]: I1008 20:02:05.740097 2461 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e6bb1ad-171a-406a-844e-20a50f1c74c3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740104 kubelet[2461]: I1008 20:02:05.740119 2461 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740134 2461 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740143 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740151 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740158 2461 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740166 2461 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740173 2461 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740181 2461 reconciler_common.go:288] "Volume detached for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6386092f-c2a4-4ef1-a950-5152151491f5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740306 kubelet[2461]: I1008 20:02:05.740189 2461 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740471 kubelet[2461]: I1008 20:02:05.740196 2461 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hjzn8\" (UniqueName: \"kubernetes.io/projected/6386092f-c2a4-4ef1-a950-5152151491f5-kube-api-access-hjzn8\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740471 kubelet[2461]: I1008 20:02:05.740203 2461 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mjg7r\" (UniqueName: \"kubernetes.io/projected/3e6bb1ad-171a-406a-844e-20a50f1c74c3-kube-api-access-mjg7r\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740471 kubelet[2461]: I1008 20:02:05.740210 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.740471 kubelet[2461]: I1008 20:02:05.740219 2461 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e6bb1ad-171a-406a-844e-20a50f1c74c3-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:05.794960 kubelet[2461]: I1008 20:02:05.794908 2461 scope.go:117] "RemoveContainer" containerID="ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7" Oct 8 20:02:05.796188 containerd[1444]: time="2024-10-08T20:02:05.796144371Z" level=info msg="RemoveContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\"" Oct 8 20:02:05.801194 containerd[1444]: time="2024-10-08T20:02:05.801156172Z" level=info msg="RemoveContainer for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" returns successfully" Oct 8 20:02:05.801636 kubelet[2461]: I1008 20:02:05.801553 2461 scope.go:117] "RemoveContainer" containerID="ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7" Oct 8 20:02:05.802449 containerd[1444]: time="2024-10-08T20:02:05.802304230Z" level=error msg="ContainerStatus for \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\": not found" Oct 8 20:02:05.806513 systemd[1]: Removed slice kubepods-besteffort-pod6386092f_c2a4_4ef1_a950_5152151491f5.slice - libcontainer container kubepods-besteffort-pod6386092f_c2a4_4ef1_a950_5152151491f5.slice. 
Oct 8 20:02:05.814905 kubelet[2461]: E1008 20:02:05.814774 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\": not found" containerID="ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7" Oct 8 20:02:05.814905 kubelet[2461]: I1008 20:02:05.814811 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7"} err="failed to get container status \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebdc2ecedbfb2f5bf03c7a095ecd20a59addb55875df7e98c0cbb98b767619c7\": not found" Oct 8 20:02:05.815599 kubelet[2461]: I1008 20:02:05.815527 2461 scope.go:117] "RemoveContainer" containerID="a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4" Oct 8 20:02:05.820201 containerd[1444]: time="2024-10-08T20:02:05.819972314Z" level=info msg="RemoveContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\"" Oct 8 20:02:05.820834 systemd[1]: Removed slice kubepods-burstable-pod3e6bb1ad_171a_406a_844e_20a50f1c74c3.slice - libcontainer container kubepods-burstable-pod3e6bb1ad_171a_406a_844e_20a50f1c74c3.slice. Oct 8 20:02:05.820964 systemd[1]: kubepods-burstable-pod3e6bb1ad_171a_406a_844e_20a50f1c74c3.slice: Consumed 6.538s CPU time. Oct 8 20:02:05.822973 containerd[1444]: time="2024-10-08T20:02:05.822933362Z" level=info msg="RemoveContainer for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" returns successfully" Oct 8 20:02:05.823152 kubelet[2461]: I1008 20:02:05.823133 2461 scope.go:117] "RemoveContainer" containerID="6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051" Oct 8 20:02:05.824538 containerd[1444]: time="2024-10-08T20:02:05.824481867Z" level=info msg="RemoveContainer for \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\"" Oct 8 20:02:05.828437 containerd[1444]: time="2024-10-08T20:02:05.828359209Z" level=info msg="RemoveContainer for \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\" returns successfully" Oct 8 20:02:05.828754 kubelet[2461]: I1008 20:02:05.828573 2461 scope.go:117] "RemoveContainer" containerID="e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25" Oct 8 20:02:05.830582 containerd[1444]: time="2024-10-08T20:02:05.829871873Z" level=info msg="RemoveContainer for \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\"" Oct 8 20:02:05.832543 containerd[1444]: time="2024-10-08T20:02:05.832487715Z" level=info msg="RemoveContainer for \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\" returns successfully" Oct 8 20:02:05.833040 kubelet[2461]: I1008 20:02:05.832814 2461 scope.go:117] "RemoveContainer" containerID="da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819" Oct 8 20:02:05.835494 containerd[1444]: time="2024-10-08T20:02:05.835158638Z" level=info msg="RemoveContainer for \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\"" Oct 8 20:02:05.838202 containerd[1444]: time="2024-10-08T20:02:05.838123966Z" level=info msg="RemoveContainer for \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\" returns successfully" Oct 8 20:02:05.838391 kubelet[2461]: I1008 20:02:05.838363 2461 scope.go:117] "RemoveContainer" 
containerID="ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8" Oct 8 20:02:05.839405 containerd[1444]: time="2024-10-08T20:02:05.839378826Z" level=info msg="RemoveContainer for \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\"" Oct 8 20:02:05.841887 containerd[1444]: time="2024-10-08T20:02:05.841850106Z" level=info msg="RemoveContainer for \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\" returns successfully" Oct 8 20:02:05.842059 kubelet[2461]: I1008 20:02:05.842017 2461 scope.go:117] "RemoveContainer" containerID="a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4" Oct 8 20:02:05.842216 containerd[1444]: time="2024-10-08T20:02:05.842184471Z" level=error msg="ContainerStatus for \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\": not found" Oct 8 20:02:05.842304 kubelet[2461]: E1008 20:02:05.842279 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\": not found" containerID="a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4" Oct 8 20:02:05.842338 kubelet[2461]: I1008 20:02:05.842309 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4"} err="failed to get container status \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecc8e646ed4db720a78ac739b2d83e187b36655355ce2b147851eb84e583d4\": not found" Oct 8 20:02:05.842338 kubelet[2461]: I1008 20:02:05.842326 2461 scope.go:117] "RemoveContainer" containerID="6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051" Oct 8 20:02:05.842600 containerd[1444]: time="2024-10-08T20:02:05.842480156Z" level=error msg="ContainerStatus for \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\": not found" Oct 8 20:02:05.842845 kubelet[2461]: E1008 20:02:05.842720 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\": not found" containerID="6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051" Oct 8 20:02:05.842845 kubelet[2461]: I1008 20:02:05.842749 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051"} err="failed to get container status \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c0c40e14e30a6a2fc74f55ead5af282922bec17dbb7818f14376a623ca35051\": not found" Oct 8 20:02:05.842845 kubelet[2461]: I1008 20:02:05.842767 2461 scope.go:117] "RemoveContainer" containerID="e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25" Oct 8 20:02:05.842977 containerd[1444]: time="2024-10-08T20:02:05.842921643Z" level=error msg="ContainerStatus for 
\"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\": not found" Oct 8 20:02:05.843048 kubelet[2461]: E1008 20:02:05.843027 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\": not found" containerID="e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25" Oct 8 20:02:05.843083 kubelet[2461]: I1008 20:02:05.843050 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25"} err="failed to get container status \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2906496f63dd3117e71de4bc730a7aff03d43b3d45d7c6219c2943cebab8a25\": not found" Oct 8 20:02:05.843083 kubelet[2461]: I1008 20:02:05.843067 2461 scope.go:117] "RemoveContainer" containerID="da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819" Oct 8 20:02:05.843327 containerd[1444]: time="2024-10-08T20:02:05.843296489Z" level=error msg="ContainerStatus for \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\": not found" Oct 8 20:02:05.843417 kubelet[2461]: E1008 20:02:05.843399 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\": not found" containerID="da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819" Oct 8 20:02:05.843461 kubelet[2461]: I1008 20:02:05.843422 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819"} err="failed to get container status \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\": rpc error: code = NotFound desc = an error occurred when try to find container \"da0c512e88e56130a359617f6c932705aabad68c8038e6430ac1f2f9755c4819\": not found" Oct 8 20:02:05.843461 kubelet[2461]: I1008 20:02:05.843438 2461 scope.go:117] "RemoveContainer" containerID="ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8" Oct 8 20:02:05.843715 containerd[1444]: time="2024-10-08T20:02:05.843623814Z" level=error msg="ContainerStatus for \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\": not found" Oct 8 20:02:05.843787 kubelet[2461]: E1008 20:02:05.843764 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\": not found" containerID="ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8" Oct 8 20:02:05.843824 kubelet[2461]: I1008 20:02:05.843790 2461 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8"} err="failed to get container status \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac6fc558c988c547ebb4cfa3a184484a4f34a8458f25a9bbcab2e3cb583ad4b8\": not found" Oct 8 20:02:06.423156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d4775a1cb2be45e22295b6fb3dd1a11a172a8d3aea7fcb17c1ee78d8f41600-rootfs.mount: Deactivated successfully. Oct 8 20:02:06.423254 systemd[1]: var-lib-kubelet-pods-3e6bb1ad\x2d171a\x2d406a\x2d844e\x2d20a50f1c74c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjg7r.mount: Deactivated successfully. Oct 8 20:02:06.423315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd45ecf40ce6bc2e593bd61516c0a729451653fd5bff9b30a5fc15772d900b5-rootfs.mount: Deactivated successfully. Oct 8 20:02:06.423369 systemd[1]: var-lib-kubelet-pods-6386092f\x2dc2a4\x2d4ef1\x2da950\x2d5152151491f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjzn8.mount: Deactivated successfully. Oct 8 20:02:06.423427 systemd[1]: var-lib-kubelet-pods-3e6bb1ad\x2d171a\x2d406a\x2d844e\x2d20a50f1c74c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 20:02:06.423481 systemd[1]: var-lib-kubelet-pods-3e6bb1ad\x2d171a\x2d406a\x2d844e\x2d20a50f1c74c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 20:02:06.622096 kubelet[2461]: I1008 20:02:06.622046 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" path="/var/lib/kubelet/pods/3e6bb1ad-171a-406a-844e-20a50f1c74c3/volumes" Oct 8 20:02:06.622627 kubelet[2461]: I1008 20:02:06.622579 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6386092f-c2a4-4ef1-a950-5152151491f5" path="/var/lib/kubelet/pods/6386092f-c2a4-4ef1-a950-5152151491f5/volumes" Oct 8 20:02:07.369888 sshd[4115]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:07.379057 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:57634.service: Deactivated successfully. Oct 8 20:02:07.380588 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:02:07.384026 systemd[1]: session-23.scope: Consumed 1.727s CPU time. Oct 8 20:02:07.385401 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:02:07.390033 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:57636.service - OpenSSH per-connection server daemon (10.0.0.1:57636). Oct 8 20:02:07.390781 systemd-logind[1425]: Removed session 23. Oct 8 20:02:07.428745 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 57636 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:02:07.429539 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:07.433671 systemd-logind[1425]: New session 24 of user core. Oct 8 20:02:07.442897 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 20:02:08.926733 sshd[4278]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:08.934054 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:57636.service: Deactivated successfully. Oct 8 20:02:08.940809 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 20:02:08.941537 systemd[1]: session-24.scope: Consumed 1.384s CPU time. 
Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945180 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="cilium-agent" Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945214 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6386092f-c2a4-4ef1-a950-5152151491f5" containerName="cilium-operator" Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945221 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="mount-cgroup" Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945227 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="apply-sysctl-overwrites" Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945233 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="mount-bpf-fs" Oct 8 20:02:08.945240 kubelet[2461]: E1008 20:02:08.945239 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="clean-cilium-state" Oct 8 20:02:08.945240 kubelet[2461]: I1008 20:02:08.945260 2461 memory_manager.go:354] "RemoveStaleState removing state" podUID="6386092f-c2a4-4ef1-a950-5152151491f5" containerName="cilium-operator" Oct 8 20:02:08.945240 kubelet[2461]: I1008 20:02:08.945267 2461 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6bb1ad-171a-406a-844e-20a50f1c74c3" containerName="cilium-agent" Oct 8 20:02:08.949566 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959319 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-host-proc-sys-kernel\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959357 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f3dbc4f-d865-4673-ae26-31b307514d26-clustermesh-secrets\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959386 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-etc-cni-netd\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959407 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-xtables-lock\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959424 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-cilium-cgroup\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " 
pod="kube-system/cilium-mk77k" Oct 8 20:02:08.960808 kubelet[2461]: I1008 20:02:08.959438 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-host-proc-sys-net\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959452 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-cilium-run\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959468 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-cni-path\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959484 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f3dbc4f-d865-4673-ae26-31b307514d26-hubble-tls\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959499 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-lib-modules\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959521 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f3dbc4f-d865-4673-ae26-31b307514d26-cilium-config-path\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961071 kubelet[2461]: I1008 20:02:08.959536 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f3dbc4f-d865-4673-ae26-31b307514d26-cilium-ipsec-secrets\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961047 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:57640.service - OpenSSH per-connection server daemon (10.0.0.1:57640). 
Oct 8 20:02:08.961323 kubelet[2461]: I1008 20:02:08.959552 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcg7z\" (UniqueName: \"kubernetes.io/projected/0f3dbc4f-d865-4673-ae26-31b307514d26-kube-api-access-tcg7z\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961323 kubelet[2461]: I1008 20:02:08.959568 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-bpf-maps\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.961323 kubelet[2461]: I1008 20:02:08.959582 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f3dbc4f-d865-4673-ae26-31b307514d26-hostproc\") pod \"cilium-mk77k\" (UID: \"0f3dbc4f-d865-4673-ae26-31b307514d26\") " pod="kube-system/cilium-mk77k" Oct 8 20:02:08.971813 systemd-logind[1425]: Removed session 24. Oct 8 20:02:08.972528 systemd[1]: Created slice kubepods-burstable-pod0f3dbc4f_d865_4673_ae26_31b307514d26.slice - libcontainer container kubepods-burstable-pod0f3dbc4f_d865_4673_ae26_31b307514d26.slice. Oct 8 20:02:09.006175 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 57640 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:02:09.007546 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:09.011067 systemd-logind[1425]: New session 25 of user core. Oct 8 20:02:09.021872 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 20:02:09.081855 sshd[4291]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:09.093116 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:57640.service: Deactivated successfully. Oct 8 20:02:09.096278 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 20:02:09.098940 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit. Oct 8 20:02:09.108971 systemd[1]: Started sshd@25-10.0.0.138:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644). Oct 8 20:02:09.112248 systemd-logind[1425]: Removed session 25. Oct 8 20:02:09.147533 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:02:09.148473 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:09.153051 systemd-logind[1425]: New session 26 of user core. Oct 8 20:02:09.163874 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 20:02:09.278097 kubelet[2461]: E1008 20:02:09.278012 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:09.280224 containerd[1444]: time="2024-10-08T20:02:09.279956443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk77k,Uid:0f3dbc4f-d865-4673-ae26-31b307514d26,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:09.299893 containerd[1444]: time="2024-10-08T20:02:09.299653560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:09.299893 containerd[1444]: time="2024-10-08T20:02:09.299720921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:09.299893 containerd[1444]: time="2024-10-08T20:02:09.299738361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:09.299893 containerd[1444]: time="2024-10-08T20:02:09.299823442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:09.318939 systemd[1]: Started cri-containerd-93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952.scope - libcontainer container 93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952. Oct 8 20:02:09.343704 containerd[1444]: time="2024-10-08T20:02:09.343232333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mk77k,Uid:0f3dbc4f-d865-4673-ae26-31b307514d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\"" Oct 8 20:02:09.345000 kubelet[2461]: E1008 20:02:09.344965 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:09.347463 containerd[1444]: time="2024-10-08T20:02:09.347428552Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:02:09.363869 containerd[1444]: time="2024-10-08T20:02:09.363827103Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c\"" Oct 8 20:02:09.364478 containerd[1444]: time="2024-10-08T20:02:09.364437351Z" level=info msg="StartContainer for \"2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c\"" Oct 8 20:02:09.385864 systemd[1]: Started cri-containerd-2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c.scope - libcontainer container 2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c. Oct 8 20:02:09.409664 containerd[1444]: time="2024-10-08T20:02:09.409477465Z" level=info msg="StartContainer for \"2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c\" returns successfully" Oct 8 20:02:09.433790 systemd[1]: cri-containerd-2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c.scope: Deactivated successfully. 
Oct 8 20:02:09.465119 containerd[1444]: time="2024-10-08T20:02:09.465060127Z" level=info msg="shim disconnected" id=2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c namespace=k8s.io Oct 8 20:02:09.465119 containerd[1444]: time="2024-10-08T20:02:09.465115808Z" level=warning msg="cleaning up after shim disconnected" id=2f9fd3682629d8f015e147f4acbf63a6ec5b9cb5d77e4261d3c5a449becf401c namespace=k8s.io Oct 8 20:02:09.465119 containerd[1444]: time="2024-10-08T20:02:09.465124488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:09.827733 kubelet[2461]: E1008 20:02:09.826930 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:09.830994 containerd[1444]: time="2024-10-08T20:02:09.829891740Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:02:09.850357 containerd[1444]: time="2024-10-08T20:02:09.850299947Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211\"" Oct 8 20:02:09.851216 containerd[1444]: time="2024-10-08T20:02:09.850943476Z" level=info msg="StartContainer for \"dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211\"" Oct 8 20:02:09.887901 systemd[1]: Started cri-containerd-dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211.scope - libcontainer container dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211. Oct 8 20:02:09.912247 containerd[1444]: time="2024-10-08T20:02:09.912204898Z" level=info msg="StartContainer for \"dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211\" returns successfully" Oct 8 20:02:09.919172 systemd[1]: cri-containerd-dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211.scope: Deactivated successfully. 
Oct 8 20:02:09.947384 containerd[1444]: time="2024-10-08T20:02:09.947328672Z" level=info msg="shim disconnected" id=dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211 namespace=k8s.io Oct 8 20:02:09.947384 containerd[1444]: time="2024-10-08T20:02:09.947384153Z" level=warning msg="cleaning up after shim disconnected" id=dea74154791a84f04b37171d722a6e2fd1fe44d082f3a7a79aaf2f7fb4f6e211 namespace=k8s.io Oct 8 20:02:09.947572 containerd[1444]: time="2024-10-08T20:02:09.947395113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:10.663275 kubelet[2461]: E1008 20:02:10.663096 2461 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:02:10.831797 kubelet[2461]: E1008 20:02:10.831582 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:10.836265 containerd[1444]: time="2024-10-08T20:02:10.835321981Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:02:10.854322 containerd[1444]: time="2024-10-08T20:02:10.854243038Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9\"" Oct 8 20:02:10.855887 containerd[1444]: time="2024-10-08T20:02:10.854962488Z" level=info msg="StartContainer for \"0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9\"" Oct 8 20:02:10.879840 systemd[1]: Started cri-containerd-0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9.scope - libcontainer container 0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9. Oct 8 20:02:10.911433 systemd[1]: cri-containerd-0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9.scope: Deactivated successfully. Oct 8 20:02:10.912599 containerd[1444]: time="2024-10-08T20:02:10.912566872Z" level=info msg="StartContainer for \"0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9\" returns successfully" Oct 8 20:02:10.935276 containerd[1444]: time="2024-10-08T20:02:10.935153820Z" level=info msg="shim disconnected" id=0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9 namespace=k8s.io Oct 8 20:02:10.935276 containerd[1444]: time="2024-10-08T20:02:10.935211580Z" level=warning msg="cleaning up after shim disconnected" id=0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9 namespace=k8s.io Oct 8 20:02:10.935276 containerd[1444]: time="2024-10-08T20:02:10.935220260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:11.068007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0190094f2fc15c88bc75a65bcb26d1a897afc5e38eb5bfb232374c19c5a028e9-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:11.836798 kubelet[2461]: E1008 20:02:11.836435 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:11.840558 containerd[1444]: time="2024-10-08T20:02:11.840486323Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:02:11.852307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377950408.mount: Deactivated successfully. Oct 8 20:02:11.853195 containerd[1444]: time="2024-10-08T20:02:11.853156690Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6\"" Oct 8 20:02:11.853759 containerd[1444]: time="2024-10-08T20:02:11.853669857Z" level=info msg="StartContainer for \"dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6\"" Oct 8 20:02:11.891868 systemd[1]: Started cri-containerd-dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6.scope - libcontainer container dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6. Oct 8 20:02:11.913796 systemd[1]: cri-containerd-dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6.scope: Deactivated successfully. Oct 8 20:02:11.916079 containerd[1444]: time="2024-10-08T20:02:11.916033878Z" level=info msg="StartContainer for \"dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6\" returns successfully" Oct 8 20:02:11.941247 containerd[1444]: time="2024-10-08T20:02:11.941045527Z" level=info msg="shim disconnected" id=dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6 namespace=k8s.io Oct 8 20:02:11.941247 containerd[1444]: time="2024-10-08T20:02:11.941102368Z" level=warning msg="cleaning up after shim disconnected" id=dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6 namespace=k8s.io Oct 8 20:02:11.941247 containerd[1444]: time="2024-10-08T20:02:11.941111008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:12.068371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6961e1c1d756b8d5b7421971516d4db8d34a2f822d1cca8c1770607bccb7a6-rootfs.mount: Deactivated successfully. 
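
From the RunPodSandbox call onwards, the journal repeats one cycle per Cilium init container: CreateContainer inside sandbox 93c81046..., StartContainer, the transient cri-containerd scope deactivating once the short-lived process exits, and the shim and rootfs.mount cleanup that follows. So far the cycle has run for mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, with the long-running cilium-agent container still to come. A sketch along the following lines (illustrative only, hypothetical excerpt path) reconstructs that ordering from the containerd messages.

#!/usr/bin/env python3
# Illustrative sketch: list containers in the order their StartContainer calls completed.
import re
import sys
from pathlib import Path

CREATED = re.compile(
    r'&ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} returns container id \\"([0-9a-f]{64})\\"')
STARTED = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')

# Fold line wraps so entries split across physical lines still match.
text = re.sub(r"\s+", " ", Path(sys.argv[1]).read_text(encoding="utf-8", errors="replace"))

name_by_id = {cid: name for name, cid in CREATED.findall(text)}
for m in STARTED.finditer(text):
    cid = m.group(1)
    print(name_by_id.get(cid, cid[:12]))
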
Oct 8 20:02:12.252710 kubelet[2461]: I1008 20:02:12.252553 2461 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T20:02:12Z","lastTransitionTime":"2024-10-08T20:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 20:02:12.840926 kubelet[2461]: E1008 20:02:12.840689 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:12.846275 containerd[1444]: time="2024-10-08T20:02:12.846133712Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:02:12.876148 containerd[1444]: time="2024-10-08T20:02:12.876084933Z" level=info msg="CreateContainer within sandbox \"93c81046783981021e49f1a732c2d717bbce317470ffcb3c56da235bd4d5c952\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a20bb5bad08482d3a366d9cd86cac0570cb6b0eb54b3b1013c04d19f1ed8115\"" Oct 8 20:02:12.876583 containerd[1444]: time="2024-10-08T20:02:12.876558099Z" level=info msg="StartContainer for \"5a20bb5bad08482d3a366d9cd86cac0570cb6b0eb54b3b1013c04d19f1ed8115\"" Oct 8 20:02:12.929831 systemd[1]: Started cri-containerd-5a20bb5bad08482d3a366d9cd86cac0570cb6b0eb54b3b1013c04d19f1ed8115.scope - libcontainer container 5a20bb5bad08482d3a366d9cd86cac0570cb6b0eb54b3b1013c04d19f1ed8115. Oct 8 20:02:12.953244 containerd[1444]: time="2024-10-08T20:02:12.953196195Z" level=info msg="StartContainer for \"5a20bb5bad08482d3a366d9cd86cac0570cb6b0eb54b3b1013c04d19f1ed8115\" returns successfully" Oct 8 20:02:13.211721 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Oct 8 20:02:13.847848 kubelet[2461]: E1008 20:02:13.847779 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:13.870727 kubelet[2461]: I1008 20:02:13.870427 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mk77k" podStartSLOduration=5.870413943 podStartE2EDuration="5.870413943s" podCreationTimestamp="2024-10-08 20:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:13.862302484 +0000 UTC m=+83.346380819" watchObservedRunningTime="2024-10-08 20:02:13.870413943 +0000 UTC m=+83.354492278" Oct 8 20:02:15.278600 kubelet[2461]: E1008 20:02:15.278557 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:15.995234 systemd-networkd[1372]: lxc_health: Link UP Oct 8 20:02:16.014105 systemd-networkd[1372]: lxc_health: Gained carrier Oct 8 20:02:17.034468 systemd-networkd[1372]: lxc_health: Gained IPv6LL Oct 8 20:02:17.279516 kubelet[2461]: E1008 20:02:17.279469 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:17.856660 kubelet[2461]: E1008 20:02:17.856631 2461 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:18.858699 kubelet[2461]: E1008 20:02:18.858647 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:19.619597 kubelet[2461]: E1008 20:02:19.619513 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:21.900175 sshd[4303]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:21.903572 systemd[1]: sshd@25-10.0.0.138:22-10.0.0.1:57644.service: Deactivated successfully. Oct 8 20:02:21.905284 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 20:02:21.907256 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit. Oct 8 20:02:21.908225 systemd-logind[1425]: Removed session 26.