Sep 4 17:29:12.917171 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:29:12.917193 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024 Sep 4 17:29:12.917202 kernel: KASLR enabled Sep 4 17:29:12.917208 kernel: efi: EFI v2.7 by EDK II Sep 4 17:29:12.917214 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:29:12.917220 kernel: random: crng init done Sep 4 17:29:12.917227 kernel: ACPI: Early table checksum verification disabled Sep 4 17:29:12.917233 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:29:12.917239 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:29:12.917246 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917252 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917258 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917264 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917270 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917278 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917285 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917292 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917298 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:12.917304 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:29:12.917311 kernel: NUMA: Failed to initialise from firmware Sep 4 17:29:12.917317 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:29:12.917324 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 4 17:29:12.917330 kernel: Zone ranges: Sep 4 17:29:12.917336 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:29:12.917342 kernel: DMA32 empty Sep 4 17:29:12.917350 kernel: Normal empty Sep 4 17:29:12.917356 kernel: Movable zone start for each node Sep 4 17:29:12.917362 kernel: Early memory node ranges Sep 4 17:29:12.917369 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:29:12.917375 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:29:12.917382 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:29:12.917388 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:29:12.917395 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:29:12.917401 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:29:12.917407 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:29:12.917414 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:29:12.917420 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:29:12.917428 kernel: psci: probing for conduit method from ACPI. Sep 4 17:29:12.917434 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:29:12.917441 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:29:12.917450 kernel: psci: Trusted OS migration not required Sep 4 17:29:12.917457 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:29:12.917464 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:29:12.917472 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:29:12.917479 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:29:12.917486 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:29:12.917493 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:29:12.917500 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:29:12.917507 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:29:12.917515 kernel: CPU features: detected: Spectre-v4 Sep 4 17:29:12.917533 kernel: CPU features: detected: Spectre-BHB Sep 4 17:29:12.917542 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:29:12.917548 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:29:12.917558 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:29:12.917565 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:29:12.917571 kernel: alternatives: applying boot alternatives Sep 4 17:29:12.917579 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:29:12.917586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:29:12.917593 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:29:12.917600 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:29:12.917606 kernel: Fallback order for Node 0: 0 Sep 4 17:29:12.917613 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:29:12.917619 kernel: Policy zone: DMA Sep 4 17:29:12.917626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:29:12.917634 kernel: software IO TLB: area num 4. Sep 4 17:29:12.917641 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:29:12.917648 kernel: Memory: 2386848K/2572288K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 185440K reserved, 0K cma-reserved) Sep 4 17:29:12.917656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:29:12.917662 kernel: trace event string verifier disabled Sep 4 17:29:12.917669 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:29:12.917676 kernel: rcu: RCU event tracing is enabled. Sep 4 17:29:12.917683 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:29:12.917690 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:29:12.917697 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:29:12.917703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:29:12.917710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:29:12.917718 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:29:12.917732 kernel: GICv3: 256 SPIs implemented Sep 4 17:29:12.917739 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:29:12.917746 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:29:12.917752 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:29:12.917759 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:29:12.917766 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:29:12.917772 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:29:12.917779 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:29:12.917786 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:29:12.917793 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:29:12.917802 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:29:12.917809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:29:12.917816 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:29:12.917823 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:29:12.917829 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:29:12.917836 kernel: arm-pv: using stolen time PV Sep 4 17:29:12.917843 kernel: Console: colour dummy device 80x25 Sep 4 17:29:12.917850 kernel: ACPI: Core revision 20230628 Sep 4 17:29:12.917857 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 17:29:12.917864 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:29:12.917872 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:29:12.917879 kernel: SELinux: Initializing. Sep 4 17:29:12.917886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:29:12.917893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:29:12.917900 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:12.917906 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:12.917913 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:29:12.917920 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:29:12.917927 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:29:12.917934 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:29:12.917942 kernel: Remapping and enabling EFI services. Sep 4 17:29:12.917949 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:29:12.917956 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:29:12.917963 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:29:12.917970 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:29:12.917977 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:29:12.917984 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:29:12.917992 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:29:12.917999 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:29:12.918008 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:29:12.918015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:29:12.918027 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:29:12.918036 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:29:12.918044 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:29:12.918052 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:29:12.918059 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:29:12.918066 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:29:12.918074 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:29:12.918082 kernel: SMP: Total of 4 processors activated. Sep 4 17:29:12.918089 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:29:12.918097 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:29:12.918104 kernel: CPU features: detected: Common not Private translations Sep 4 17:29:12.918111 kernel: CPU features: detected: CRC32 instructions Sep 4 17:29:12.918119 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:29:12.918126 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:29:12.918133 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:29:12.918142 kernel: CPU features: detected: Privileged Access Never Sep 4 17:29:12.918149 kernel: CPU features: detected: RAS Extension Support Sep 4 17:29:12.918156 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:29:12.918163 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:29:12.918170 kernel: alternatives: applying system-wide alternatives Sep 4 17:29:12.918177 kernel: devtmpfs: initialized Sep 4 17:29:12.918185 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:29:12.918192 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:29:12.918199 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:29:12.918208 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:29:12.918215 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:29:12.918222 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:29:12.918230 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:29:12.918246 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:29:12.918253 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:29:12.918261 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:29:12.918268 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 4 17:29:12.918276 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:29:12.918285 kernel: cpuidle: using governor menu Sep 4 17:29:12.918293 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:29:12.918300 kernel: ASID allocator initialised with 32768 entries Sep 4 17:29:12.918308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:29:12.918315 kernel: Serial: AMBA PL011 UART driver Sep 4 17:29:12.918322 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:29:12.918330 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:29:12.918337 kernel: Modules: 509120 pages in range for PLT usage Sep 4 17:29:12.918344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:29:12.918353 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:29:12.918360 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:29:12.918367 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:29:12.918375 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:29:12.918382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:29:12.918389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:29:12.918397 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:29:12.918404 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:29:12.918411 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:29:12.918420 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:29:12.918427 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:29:12.918434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:29:12.918442 kernel: ACPI: Interpreter enabled Sep 4 17:29:12.918449 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:29:12.918456 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:29:12.918465 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:29:12.918473 kernel: printk: console [ttyAMA0] enabled Sep 4 17:29:12.918482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:29:12.918648 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:29:12.918733 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:29:12.918805 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:29:12.918872 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:29:12.918955 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:29:12.918966 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:29:12.918973 kernel: PCI host bridge to bus 
0000:00 Sep 4 17:29:12.919050 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:29:12.919108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:29:12.919165 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:29:12.919221 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:29:12.919300 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 4 17:29:12.919379 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:29:12.919450 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:29:12.919519 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:29:12.919616 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:29:12.919682 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:29:12.919757 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:29:12.919823 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:29:12.919881 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:29:12.919938 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:29:12.919999 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:29:12.920009 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:29:12.920017 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:29:12.920025 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:29:12.920033 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:29:12.920040 kernel: iommu: Default domain type: Translated Sep 4 17:29:12.920048 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:29:12.920056 kernel: efivars: Registered efivars operations Sep 4 17:29:12.920065 kernel: vgaarb: loaded Sep 4 17:29:12.920073 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:29:12.920081 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:29:12.920089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:29:12.920096 kernel: pnp: PnP ACPI init Sep 4 17:29:12.920166 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:29:12.920177 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:29:12.920185 kernel: NET: Registered PF_INET protocol family Sep 4 17:29:12.920193 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:29:12.920201 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:29:12.920209 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:29:12.920216 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:29:12.920224 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:29:12.920231 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:29:12.920238 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:29:12.920245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:29:12.920253 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:29:12.920262 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:29:12.920269 kernel: kvm [1]: HYP mode not available Sep 4 17:29:12.920276 kernel: Initialise system trusted keyrings Sep 4 
17:29:12.920283 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:29:12.920291 kernel: Key type asymmetric registered Sep 4 17:29:12.920298 kernel: Asymmetric key parser 'x509' registered Sep 4 17:29:12.920305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:29:12.920313 kernel: io scheduler mq-deadline registered Sep 4 17:29:12.920320 kernel: io scheduler kyber registered Sep 4 17:29:12.920329 kernel: io scheduler bfq registered Sep 4 17:29:12.920337 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:29:12.920344 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:29:12.920352 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:29:12.920418 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:29:12.920428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:29:12.920436 kernel: thunder_xcv, ver 1.0 Sep 4 17:29:12.920443 kernel: thunder_bgx, ver 1.0 Sep 4 17:29:12.920451 kernel: nicpf, ver 1.0 Sep 4 17:29:12.920460 kernel: nicvf, ver 1.0 Sep 4 17:29:12.920548 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:29:12.920613 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:29:12 UTC (1725470952) Sep 4 17:29:12.920623 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:29:12.920631 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:29:12.920638 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:29:12.920646 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:29:12.920653 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:29:12.920662 kernel: Segment Routing with IPv6 Sep 4 17:29:12.920670 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:29:12.920677 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:29:12.920685 kernel: Key type dns_resolver registered Sep 4 17:29:12.920692 kernel: registered taskstats version 1 Sep 4 17:29:12.920699 kernel: Loading compiled-in X.509 certificates Sep 4 17:29:12.920707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7' Sep 4 17:29:12.920714 kernel: Key type .fscrypt registered Sep 4 17:29:12.920728 kernel: Key type fscrypt-provisioning registered Sep 4 17:29:12.920738 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:29:12.920746 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:29:12.920753 kernel: ima: No architecture policies found Sep 4 17:29:12.920760 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:29:12.920767 kernel: clk: Disabling unused clocks Sep 4 17:29:12.920775 kernel: Freeing unused kernel memory: 39040K Sep 4 17:29:12.920782 kernel: Run /init as init process Sep 4 17:29:12.920789 kernel: with arguments: Sep 4 17:29:12.920796 kernel: /init Sep 4 17:29:12.920805 kernel: with environment: Sep 4 17:29:12.920812 kernel: HOME=/ Sep 4 17:29:12.920819 kernel: TERM=linux Sep 4 17:29:12.920826 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:29:12.920835 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:12.920845 systemd[1]: Detected virtualization kvm. 
Sep 4 17:29:12.920853 systemd[1]: Detected architecture arm64. Sep 4 17:29:12.920860 systemd[1]: Running in initrd. Sep 4 17:29:12.920870 systemd[1]: No hostname configured, using default hostname. Sep 4 17:29:12.920877 systemd[1]: Hostname set to . Sep 4 17:29:12.920885 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:29:12.920893 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:29:12.920900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:12.920908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:12.920917 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:29:12.920925 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:12.920934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:29:12.920942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:29:12.920952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:29:12.920960 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:29:12.920968 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:12.920977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:12.920986 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:12.920995 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:12.921003 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:12.921011 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:12.921019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:12.921027 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:12.921036 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:29:12.921044 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:29:12.921052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:12.921062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:12.921070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:12.921078 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:12.921086 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:29:12.921094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:12.921102 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:29:12.921109 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:29:12.921117 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:12.921125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:12.921134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:12.921142 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:12.921150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 4 17:29:12.921158 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:29:12.921166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:12.921176 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:12.921184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:12.921192 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:29:12.921217 systemd-journald[237]: Collecting audit messages is disabled. Sep 4 17:29:12.921238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:12.921246 kernel: Bridge firewalling registered Sep 4 17:29:12.921253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:12.921263 systemd-journald[237]: Journal started Sep 4 17:29:12.921280 systemd-journald[237]: Runtime Journal (/run/log/journal/4c79cfa55e3c4b92abf407d50ff2a1e1) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:29:12.893100 systemd-modules-load[239]: Inserted module 'overlay' Sep 4 17:29:12.924275 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:12.918333 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 4 17:29:12.923773 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:12.934738 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:12.936663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:12.938670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:12.946686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:12.950482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:12.974778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:12.975800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:12.978635 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:29:12.992756 dracut-cmdline[280]: dracut-dracut-053 Sep 4 17:29:12.995370 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:29:13.005550 systemd-resolved[277]: Positive Trust Anchors: Sep 4 17:29:13.005565 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:13.005596 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:13.010391 systemd-resolved[277]: Defaulting to hostname 'linux'. Sep 4 17:29:13.011395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:13.014935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:13.068552 kernel: SCSI subsystem initialized Sep 4 17:29:13.073540 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:29:13.080547 kernel: iscsi: registered transport (tcp) Sep 4 17:29:13.094556 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:29:13.094576 kernel: QLogic iSCSI HBA Driver Sep 4 17:29:13.140604 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:13.147704 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:29:13.165732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:29:13.165811 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:29:13.165831 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:29:13.211553 kernel: raid6: neonx8 gen() 15779 MB/s Sep 4 17:29:13.228539 kernel: raid6: neonx4 gen() 15682 MB/s Sep 4 17:29:13.245536 kernel: raid6: neonx2 gen() 13243 MB/s Sep 4 17:29:13.262538 kernel: raid6: neonx1 gen() 10517 MB/s Sep 4 17:29:13.279536 kernel: raid6: int64x8 gen() 6953 MB/s Sep 4 17:29:13.296538 kernel: raid6: int64x4 gen() 7357 MB/s Sep 4 17:29:13.313539 kernel: raid6: int64x2 gen() 6131 MB/s Sep 4 17:29:13.330543 kernel: raid6: int64x1 gen() 5050 MB/s Sep 4 17:29:13.330564 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Sep 4 17:29:13.347545 kernel: raid6: .... xor() 11907 MB/s, rmw enabled Sep 4 17:29:13.347564 kernel: raid6: using neon recovery algorithm Sep 4 17:29:13.352540 kernel: xor: measuring software checksum speed Sep 4 17:29:13.353536 kernel: 8regs : 19854 MB/sec Sep 4 17:29:13.353549 kernel: 32regs : 19673 MB/sec Sep 4 17:29:13.354652 kernel: arm64_neon : 27170 MB/sec Sep 4 17:29:13.354664 kernel: xor: using function: arm64_neon (27170 MB/sec) Sep 4 17:29:13.412544 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:29:13.434037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:13.449793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:13.465819 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 4 17:29:13.469112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:13.475682 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:29:13.491151 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Sep 4 17:29:13.523971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:29:13.535735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:13.588085 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:13.596685 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:29:13.611097 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:13.612777 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:13.614481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:13.617350 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:13.626768 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:29:13.639923 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:13.643796 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 4 17:29:13.644225 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:29:13.651754 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:29:13.651780 kernel: GPT:9289727 != 19775487 Sep 4 17:29:13.651790 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:29:13.651800 kernel: GPT:9289727 != 19775487 Sep 4 17:29:13.652295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:13.653927 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:29:13.652411 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:13.656640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:13.656674 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:13.657549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:13.657689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:13.660838 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:13.671823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:13.680562 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (515) Sep 4 17:29:13.683411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 17:29:13.686003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:13.689572 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523) Sep 4 17:29:13.701432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:29:13.705396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:29:13.706711 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:29:13.712730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:29:13.725672 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:29:13.727269 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:13.732268 disk-uuid[550]: Primary Header is updated. 
Sep 4 17:29:13.732268 disk-uuid[550]: Secondary Entries is updated. Sep 4 17:29:13.732268 disk-uuid[550]: Secondary Header is updated. Sep 4 17:29:13.735548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:13.750996 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:14.750546 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:14.751408 disk-uuid[551]: The operation has completed successfully. Sep 4 17:29:14.777408 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:29:14.777507 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:29:14.797711 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:29:14.801767 sh[573]: Success Sep 4 17:29:14.814551 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:29:14.861727 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:29:14.876977 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:29:14.878548 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:29:14.893047 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20 Sep 4 17:29:14.893096 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:29:14.893107 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:29:14.893126 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:29:14.893677 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:29:14.899917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:29:14.901130 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:29:14.914691 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:29:14.916276 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:29:14.924894 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:29:14.924950 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:29:14.924962 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:14.928138 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:14.936119 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:29:14.938082 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:29:14.942730 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:29:14.951766 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:29:15.025057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:15.041757 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 17:29:15.054873 ignition[663]: Ignition 2.18.0 Sep 4 17:29:15.054884 ignition[663]: Stage: fetch-offline Sep 4 17:29:15.055017 ignition[663]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:15.055028 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:15.055243 ignition[663]: parsed url from cmdline: "" Sep 4 17:29:15.055248 ignition[663]: no config URL provided Sep 4 17:29:15.055259 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:15.055271 ignition[663]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:15.055305 ignition[663]: op(1): [started] loading QEMU firmware config module Sep 4 17:29:15.055311 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:29:15.067183 systemd-networkd[764]: lo: Link UP Sep 4 17:29:15.067195 systemd-networkd[764]: lo: Gained carrier Sep 4 17:29:15.067963 systemd-networkd[764]: Enumeration completed Sep 4 17:29:15.070885 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:15.071634 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:15.071638 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:29:15.075583 ignition[663]: op(1): [finished] loading QEMU firmware config module Sep 4 17:29:15.071822 systemd[1]: Reached target network.target - Network. Sep 4 17:29:15.073700 systemd-networkd[764]: eth0: Link UP Sep 4 17:29:15.073704 systemd-networkd[764]: eth0: Gained carrier Sep 4 17:29:15.073718 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:15.099577 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:29:15.121065 ignition[663]: parsing config with SHA512: 6f85cfd2863e404214897a07fab5a4c654a67e3e001efab4d7af5aa58a502310fa5a091972b69f62b8a15d7d5605748aafeb8ea4a42a32b7c4178d560f47f6ee Sep 4 17:29:15.125790 unknown[663]: fetched base config from "system" Sep 4 17:29:15.125800 unknown[663]: fetched user config from "qemu" Sep 4 17:29:15.126272 ignition[663]: fetch-offline: fetch-offline passed Sep 4 17:29:15.126326 ignition[663]: Ignition finished successfully Sep 4 17:29:15.128070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:15.129810 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:29:15.144850 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:29:15.157345 ignition[771]: Ignition 2.18.0 Sep 4 17:29:15.157358 ignition[771]: Stage: kargs Sep 4 17:29:15.157616 ignition[771]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:15.157627 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:15.158928 ignition[771]: kargs: kargs passed Sep 4 17:29:15.158982 ignition[771]: Ignition finished successfully Sep 4 17:29:15.161948 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:29:15.175798 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 17:29:15.187240 ignition[780]: Ignition 2.18.0 Sep 4 17:29:15.187250 ignition[780]: Stage: disks Sep 4 17:29:15.187419 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:15.187430 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:15.188392 ignition[780]: disks: disks passed Sep 4 17:29:15.188440 ignition[780]: Ignition finished successfully Sep 4 17:29:15.191605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:29:15.193005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:15.194361 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:15.196353 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:15.198093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:15.200045 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:15.210770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:29:15.224297 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:29:15.229814 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:29:15.238653 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:29:15.309581 kernel: EXT4-fs (vda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none. Sep 4 17:29:15.310106 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:29:15.311333 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:15.326630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:15.328486 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:29:15.329657 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:29:15.329702 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:29:15.329734 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:15.340446 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Sep 4 17:29:15.337685 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:29:15.344496 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:29:15.344517 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:29:15.344546 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:15.339573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:29:15.347549 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:15.349165 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:29:15.394497 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:29:15.399212 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:29:15.403592 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:29:15.407405 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:29:15.489360 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 4 17:29:15.503677 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:29:15.505288 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:29:15.510810 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:29:15.531377 ignition[913]: INFO : Ignition 2.18.0 Sep 4 17:29:15.531377 ignition[913]: INFO : Stage: mount Sep 4 17:29:15.533274 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:15.533274 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:15.533274 ignition[913]: INFO : mount: mount passed Sep 4 17:29:15.533274 ignition[913]: INFO : Ignition finished successfully Sep 4 17:29:15.533291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:29:15.536743 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:29:15.546674 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:29:15.890435 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:29:15.910714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:15.918218 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Sep 4 17:29:15.918261 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:29:15.918273 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:29:15.918894 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:15.922545 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:15.923085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:29:15.943075 ignition[944]: INFO : Ignition 2.18.0 Sep 4 17:29:15.943075 ignition[944]: INFO : Stage: files Sep 4 17:29:15.944442 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:15.944442 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:15.944442 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:29:15.947658 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:29:15.947658 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:29:15.947658 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:29:15.947658 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:29:15.951732 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:29:15.951732 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:29:15.951732 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:29:15.948099 unknown[944]: wrote ssh authorized keys file for user: core Sep 4 17:29:15.994938 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:29:16.035584 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:29:16.037663 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:29:16.037663 
ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 17:29:16.292048 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:29:16.360699 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:29:16.362725 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Sep 4 17:29:16.539234 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:29:16.854813 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:29:16.854813 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:16.859285 
ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:29:16.859285 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:29:16.880176 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:29:16.884631 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:29:16.886171 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:29:16.886171 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:16.886171 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:16.886171 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:16.886171 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:16.886171 ignition[944]: INFO : files: files passed Sep 4 17:29:16.886171 ignition[944]: INFO : Ignition finished successfully Sep 4 17:29:16.887057 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:29:16.899718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:29:16.902750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:29:16.905415 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:29:16.905510 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:29:16.911229 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:29:16.914400 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:16.914400 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:16.916842 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:16.922571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:16.924001 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:29:16.937667 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:29:16.965645 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:29:16.965770 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:29:16.967016 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 4 17:29:16.967903 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:29:16.968664 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:29:16.974600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:29:16.990567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:17.002710 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:29:17.013206 systemd[1]: Stopped target network.target - Network. Sep 4 17:29:17.014162 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:17.015663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:17.017430 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:29:17.019125 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:29:17.019252 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:17.021322 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:29:17.022425 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:29:17.024226 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:29:17.025768 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:17.027233 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:17.028698 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:29:17.030315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:17.031998 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:29:17.033535 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:29:17.035383 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:29:17.036595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:29:17.036720 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:17.038908 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:17.040564 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:17.042346 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:29:17.045589 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:17.046735 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:29:17.046857 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:17.049280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:29:17.049389 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:17.051564 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:29:17.053060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:29:17.053177 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:17.054820 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:29:17.056156 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:29:17.057514 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 4 17:29:17.057618 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:17.059492 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:29:17.059581 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:17.061052 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:29:17.061162 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:17.062775 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:29:17.062877 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:29:17.076731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:29:17.077495 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:29:17.077641 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:17.082743 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:29:17.083583 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:29:17.087907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:29:17.089851 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:29:17.089986 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:17.091012 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 4 17:29:17.094255 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:29:17.094481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:17.100143 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:29:17.100256 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:29:17.109887 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:29:17.118850 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:29:17.124059 ignition[999]: INFO : Ignition 2.18.0 Sep 4 17:29:17.124059 ignition[999]: INFO : Stage: umount Sep 4 17:29:17.124059 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:17.124059 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:17.124059 ignition[999]: INFO : umount: umount passed Sep 4 17:29:17.124059 ignition[999]: INFO : Ignition finished successfully Sep 4 17:29:17.118894 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:17.132681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:29:17.134323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:29:17.134393 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:17.135945 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:29:17.136037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:29:17.140876 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:29:17.140970 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:29:17.142987 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:29:17.143078 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:29:17.146347 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:29:17.146411 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Sep 4 17:29:17.148419 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:29:17.148481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:29:17.150033 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:29:17.150081 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:29:17.151687 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:29:17.151745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:17.153309 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:17.153360 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:17.154849 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:29:17.154892 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:17.156574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:29:17.156619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:17.158640 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:17.160499 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:29:17.161383 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:29:17.184255 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:29:17.184398 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:17.186753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:29:17.186795 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:17.187653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:29:17.187681 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:17.190390 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:29:17.190441 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:17.193163 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:29:17.193206 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:17.195354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:17.195392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:17.206675 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:29:17.207516 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:29:17.207591 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:17.209480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:17.209519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:17.211669 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:29:17.211776 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:29:17.213357 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:29:17.213435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:29:17.215542 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:29:17.216442 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 4 17:29:17.216511 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:17.218996 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:29:17.229102 systemd[1]: Switching root. Sep 4 17:29:17.254476 systemd-journald[237]: Journal stopped Sep 4 17:29:17.962520 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 4 17:29:17.962598 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:29:17.962612 kernel: SELinux: policy capability open_perms=1 Sep 4 17:29:17.962622 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:29:17.962633 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:29:17.962643 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:29:17.962654 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:29:17.962667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:29:17.962683 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:29:17.962693 kernel: audit: type=1403 audit(1725470957.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:29:17.962717 systemd[1]: Successfully loaded SELinux policy in 30.310ms. Sep 4 17:29:17.962739 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.295ms. Sep 4 17:29:17.962751 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:17.962763 systemd[1]: Detected virtualization kvm. Sep 4 17:29:17.962774 systemd[1]: Detected architecture arm64. Sep 4 17:29:17.962784 systemd[1]: Detected first boot. Sep 4 17:29:17.962797 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:29:17.962807 zram_generator::config[1044]: No configuration found. Sep 4 17:29:17.962818 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:29:17.962829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:29:17.962840 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:29:17.962853 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:17.962864 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:29:17.962875 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:29:17.962887 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:29:17.962898 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:29:17.962909 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:29:17.962920 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:29:17.962930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:29:17.962941 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:29:17.962955 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:17.962966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:17.962976 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 4 17:29:17.962987 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:29:17.962998 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:29:17.963009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:17.963019 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:29:17.963029 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:17.963039 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:29:17.963050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:29:17.963060 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:17.963072 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:29:17.963083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:17.963093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:17.963103 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:17.963114 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:17.963124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:29:17.963135 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:29:17.963145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:17.963156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:17.963168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:17.963178 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:29:17.963188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:29:17.963198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:29:17.963208 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:29:17.963218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:29:17.963228 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:29:17.963239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:29:17.963249 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:29:17.963262 systemd[1]: Reached target machines.target - Containers. Sep 4 17:29:17.963272 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:29:17.963282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:17.963292 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:17.963303 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:29:17.963313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:17.963323 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:17.963334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 4 17:29:17.963345 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:29:17.963356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:17.963367 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:29:17.963377 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:29:17.963388 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:29:17.963398 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:29:17.963408 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:29:17.963418 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:17.963428 kernel: loop: module loaded Sep 4 17:29:17.963439 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:17.963449 kernel: fuse: init (API version 7.39) Sep 4 17:29:17.963459 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:29:17.963469 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:29:17.963479 kernel: ACPI: bus type drm_connector registered Sep 4 17:29:17.963488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:17.963499 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:29:17.963509 systemd[1]: Stopped verity-setup.service. Sep 4 17:29:17.963520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:29:17.963562 systemd-journald[1107]: Collecting audit messages is disabled. Sep 4 17:29:17.963585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:29:17.963596 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:29:17.963608 systemd-journald[1107]: Journal started Sep 4 17:29:17.963630 systemd-journald[1107]: Runtime Journal (/run/log/journal/4c79cfa55e3c4b92abf407d50ff2a1e1) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:29:17.763912 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:29:17.782585 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:29:17.782960 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:29:17.967212 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:17.967549 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:29:17.968971 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:29:17.970463 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:29:17.972130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:17.973672 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:29:17.973850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:29:17.975269 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:29:17.976557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:17.976713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:17.977784 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:17.977916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 4 17:29:17.979037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:17.979170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:17.980563 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:29:17.980694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:29:17.981814 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:17.981952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:17.983124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:17.984446 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:29:17.986045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:29:17.999949 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:29:18.017706 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:29:18.019764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:29:18.020911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:29:18.020964 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:18.022916 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:29:18.025203 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:29:18.027624 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:29:18.028561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:18.031029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:29:18.033233 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:29:18.034607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:18.037698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:29:18.039008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:18.040722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:18.043287 systemd-journald[1107]: Time spent on flushing to /var/log/journal/4c79cfa55e3c4b92abf407d50ff2a1e1 is 31.916ms for 855 entries. Sep 4 17:29:18.043287 systemd-journald[1107]: System Journal (/var/log/journal/4c79cfa55e3c4b92abf407d50ff2a1e1) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:29:18.099493 systemd-journald[1107]: Received client request to flush runtime journal. Sep 4 17:29:18.099617 kernel: loop0: detected capacity change from 0 to 113672 Sep 4 17:29:18.099644 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:29:18.045888 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:29:18.049736 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:29:18.053754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 4 17:29:18.056913 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:29:18.058421 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:29:18.060177 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:29:18.061983 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:29:18.067906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:18.069313 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:29:18.084821 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:29:18.090352 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:29:18.103647 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:29:18.110272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:29:18.116355 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:29:18.132871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:18.134906 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:29:18.135940 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:29:18.138577 kernel: loop1: detected capacity change from 0 to 194096 Sep 4 17:29:18.140717 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:29:18.153642 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Sep 4 17:29:18.153660 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Sep 4 17:29:18.158399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:18.190668 kernel: loop2: detected capacity change from 0 to 59688 Sep 4 17:29:18.240548 kernel: loop3: detected capacity change from 0 to 113672 Sep 4 17:29:18.247557 kernel: loop4: detected capacity change from 0 to 194096 Sep 4 17:29:18.255538 kernel: loop5: detected capacity change from 0 to 59688 Sep 4 17:29:18.260103 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:29:18.261046 (sd-merge)[1178]: Merged extensions into '/usr'. Sep 4 17:29:18.265079 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:29:18.265093 systemd[1]: Reloading... Sep 4 17:29:18.312561 zram_generator::config[1204]: No configuration found. Sep 4 17:29:18.409366 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:29:18.430993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:18.469254 systemd[1]: Reloading finished in 203 ms. Sep 4 17:29:18.502644 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:29:18.504030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:29:18.526825 systemd[1]: Starting ensure-sysext.service... Sep 4 17:29:18.528776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
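The sd-merge lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. A small sketch, assuming the usual sysext search directories, of how one might enumerate the images that step considers; only /etc/extensions (the kubernetes.raw symlink written by Ignition earlier) is actually confirmed by this log.

# Sketch: list candidate system-extension images. The directory list is the
# customary systemd-sysext search path (an assumption here); only
# /etc/extensions is visible in the log above.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for directory in SEARCH_DIRS:
    root = Path(directory)
    if not root.is_dir():
        continue
    for entry in sorted(root.iterdir()):
        # sysext images are raw disk images or plain directory trees
        if entry.suffix == ".raw" or entry.is_dir():
            print(f"{directory}: {entry.name}")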
Sep 4 17:29:18.536561 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:29:18.536576 systemd[1]: Reloading... Sep 4 17:29:18.548191 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:29:18.548824 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:29:18.549611 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:29:18.549934 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 4 17:29:18.550051 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Sep 4 17:29:18.552265 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:18.552372 systemd-tmpfiles[1240]: Skipping /boot Sep 4 17:29:18.559326 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:18.559451 systemd-tmpfiles[1240]: Skipping /boot Sep 4 17:29:18.587558 zram_generator::config[1266]: No configuration found. Sep 4 17:29:18.666379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:18.703902 systemd[1]: Reloading finished in 167 ms. Sep 4 17:29:18.718285 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:29:18.729953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:18.737348 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:18.739949 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:29:18.742360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:29:18.747838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:18.752567 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:18.759665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:29:18.762858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:18.764627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:18.768801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:18.775810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:18.776788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:18.781821 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:29:18.785571 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:29:18.786998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:18.787129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:18.788980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:18.789106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 4 17:29:18.790838 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:18.790966 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:18.792998 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Sep 4 17:29:18.799971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:18.805929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:18.809815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:18.814031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:18.815049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:18.817951 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:29:18.820572 augenrules[1332]: No rules Sep 4 17:29:18.821053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:18.830696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:18.833111 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:29:18.836258 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:29:18.838437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:18.839055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:18.842047 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:29:18.845147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:18.845329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:18.847419 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:18.848054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:18.863554 systemd[1]: Finished ensure-sysext.service. Sep 4 17:29:18.872677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:18.877035 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1353) Sep 4 17:29:18.882120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:18.884797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:18.888324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:18.891949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:18.896956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:18.897542 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1351) Sep 4 17:29:18.902910 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:18.908518 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:29:18.910386 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 4 17:29:18.911141 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:29:18.913244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:18.913372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:18.914724 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:18.914852 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:29:18.916202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:18.916353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:18.917849 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:18.918023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:18.918983 systemd-resolved[1306]: Positive Trust Anchors: Sep 4 17:29:18.919236 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:18.919275 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:29:18.919357 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:18.925889 systemd-resolved[1306]: Defaulting to hostname 'linux'. Sep 4 17:29:18.934359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:18.935987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:18.937738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:18.937798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:18.950237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:29:18.963760 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:29:18.976120 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:29:18.977707 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:29:18.986557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:29:18.989739 systemd-networkd[1375]: lo: Link UP Sep 4 17:29:18.989746 systemd-networkd[1375]: lo: Gained carrier Sep 4 17:29:18.990404 systemd-networkd[1375]: Enumeration completed Sep 4 17:29:18.990503 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:18.991791 systemd[1]: Reached target network.target - Network. Sep 4 17:29:18.994773 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:18.994784 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
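systemd-networkd matches eth0 against /usr/lib/systemd/network/zz-default.network and, as the next entries show, obtains 10.0.0.103/16 over DHCPv4. A sketch of a catch-all DHCP .network unit in the same spirit, assuming the shipped file is essentially a match-everything DHCP policy (its exact contents are not shown in this log); a same-named copy under /etc/systemd/network/ would take precedence over the /usr/lib one.

# Sketch: write a catch-all DHCP .network unit similar in spirit to the
# zz-default.network matched above; its body here is an assumption, not a
# copy of the shipped file.
from pathlib import Path

UNIT_TEXT = """\
[Match]
Name=*

[Network]
DHCP=yes
"""

target = Path("/etc/systemd/network/zz-default.network")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(UNIT_TEXT)
print(f"wrote {target}")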
Sep 4 17:29:18.997755 systemd-networkd[1375]: eth0: Link UP Sep 4 17:29:18.997758 systemd-networkd[1375]: eth0: Gained carrier Sep 4 17:29:18.997772 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:18.999757 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:29:19.010100 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:19.012285 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:29:19.013347 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 4 17:29:18.531189 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:29:18.539825 systemd-journald[1107]: Time jumped backwards, rotating. Sep 4 17:29:18.531244 systemd-timesyncd[1377]: Initial clock synchronization to Wed 2024-09-04 17:29:18.531094 UTC. Sep 4 17:29:18.531457 systemd-resolved[1306]: Clock change detected. Flushing caches. Sep 4 17:29:18.532680 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:29:18.535219 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:29:18.568725 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:18.588929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:18.603855 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:29:18.605254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:18.607467 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:18.608427 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:29:18.609312 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:29:18.610472 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:29:18.611413 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:29:18.612655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:29:18.613752 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:29:18.613790 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:18.614462 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:18.616052 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:29:18.618539 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:29:18.625268 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:29:18.627694 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:29:18.629136 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:29:18.630411 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:18.631201 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:18.632177 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 17:29:18.632217 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:29:18.633234 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:29:18.635403 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:29:18.638487 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:18.639532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:29:18.642842 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:29:18.644636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:29:18.646585 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:29:18.651495 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:29:18.654820 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:29:18.657319 jq[1410]: false Sep 4 17:29:18.658675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:29:18.664873 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:29:18.668165 extend-filesystems[1411]: Found loop3 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found loop4 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found loop5 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda1 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda2 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda3 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found usr Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda4 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda6 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda7 Sep 4 17:29:18.670109 extend-filesystems[1411]: Found vda9 Sep 4 17:29:18.670109 extend-filesystems[1411]: Checking size of /dev/vda9 Sep 4 17:29:18.670708 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:29:18.671200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:29:18.680599 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:29:18.682826 dbus-daemon[1409]: [system] SELinux support is enabled Sep 4 17:29:18.687753 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:29:18.689416 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:29:18.693575 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:29:18.693709 extend-filesystems[1411]: Resized partition /dev/vda9 Sep 4 17:29:18.699415 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1343) Sep 4 17:29:18.699070 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:29:18.699247 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:29:18.699589 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:29:18.699735 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
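The extend-filesystems entries above locate /dev/vda9, and in the entries that follow resize2fs grows its ext4 filesystem from 553472 to 1864699 blocks of 4 KiB. A quick arithmetic check of what those block counts mean in bytes:

# Block counts and block size come from the EXT4-fs/resize2fs messages just
# below; this only converts them to human-readable sizes.
BLOCK_SIZE = 4096

for label, blocks in (("before", 553472), ("after", 1864699)):
    size_bytes = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size_bytes} bytes = {size_bytes / 2**30:.2f} GiB")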
Sep 4 17:29:18.702900 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:29:18.703064 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:29:18.714242 jq[1431]: true Sep 4 17:29:18.724043 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:29:18.724091 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:29:18.726006 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:29:18.727074 update_engine[1425]: I0904 17:29:18.726754 1425 main.cc:92] Flatcar Update Engine starting Sep 4 17:29:18.726038 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:29:18.732315 extend-filesystems[1434]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:29:18.734209 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:29:18.739011 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:29:18.739094 tar[1435]: linux-arm64/helm Sep 4 17:29:18.742667 update_engine[1425]: I0904 17:29:18.739580 1425 update_check_scheduler.cc:74] Next update check in 6m7s Sep 4 17:29:18.744049 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:29:18.744399 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:29:18.755503 jq[1442]: true Sep 4 17:29:18.767517 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:29:18.767381 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:29:18.768398 systemd-logind[1420]: New seat seat0. Sep 4 17:29:18.779914 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:29:18.793198 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:29:18.793198 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:29:18.793198 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:29:18.802213 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Sep 4 17:29:18.796035 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:29:18.796298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:29:18.835246 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:29:18.837323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:29:18.837696 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:29:18.839257 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:29:19.058055 containerd[1443]: time="2024-09-04T17:29:19.057916935Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:29:19.084818 containerd[1443]: time="2024-09-04T17:29:19.084498695Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 4 17:29:19.084818 containerd[1443]: time="2024-09-04T17:29:19.084557455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.085922935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.085955455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086164015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086180215Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086248335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086294895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086306575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086381735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086572015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086589735Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:29:19.086912 containerd[1443]: time="2024-09-04T17:29:19.086599335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:19.087170 containerd[1443]: time="2024-09-04T17:29:19.086687695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:19.087170 containerd[1443]: time="2024-09-04T17:29:19.086701695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:29:19.087170 containerd[1443]: time="2024-09-04T17:29:19.086748655Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:29:19.087170 containerd[1443]: time="2024-09-04T17:29:19.086761175Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:29:19.090385 containerd[1443]: time="2024-09-04T17:29:19.090360455Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:29:19.090577 containerd[1443]: time="2024-09-04T17:29:19.090558175Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:29:19.090637 containerd[1443]: time="2024-09-04T17:29:19.090625615Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:29:19.090710 containerd[1443]: time="2024-09-04T17:29:19.090698135Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:29:19.090880 containerd[1443]: time="2024-09-04T17:29:19.090864015Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:29:19.090947 containerd[1443]: time="2024-09-04T17:29:19.090933575Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:29:19.090999 containerd[1443]: time="2024-09-04T17:29:19.090986615Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:29:19.091228 containerd[1443]: time="2024-09-04T17:29:19.091210015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:29:19.091307 containerd[1443]: time="2024-09-04T17:29:19.091293255Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:29:19.091437 containerd[1443]: time="2024-09-04T17:29:19.091368815Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:29:19.091498 containerd[1443]: time="2024-09-04T17:29:19.091484415Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:29:19.091579 containerd[1443]: time="2024-09-04T17:29:19.091564935Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.091685 containerd[1443]: time="2024-09-04T17:29:19.091669695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.091753 containerd[1443]: time="2024-09-04T17:29:19.091738815Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.091803 containerd[1443]: time="2024-09-04T17:29:19.091792095Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.091862 containerd[1443]: time="2024-09-04T17:29:19.091849375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.091964 containerd[1443]: time="2024-09-04T17:29:19.091949615Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 4 17:29:19.092025 containerd[1443]: time="2024-09-04T17:29:19.092011735Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.092087 containerd[1443]: time="2024-09-04T17:29:19.092066135Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:29:19.092392 containerd[1443]: time="2024-09-04T17:29:19.092305335Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:29:19.093065 containerd[1443]: time="2024-09-04T17:29:19.092833775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:29:19.093065 containerd[1443]: time="2024-09-04T17:29:19.092870655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.093065 containerd[1443]: time="2024-09-04T17:29:19.092885735Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:29:19.093065 containerd[1443]: time="2024-09-04T17:29:19.092910495Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:29:19.093281 containerd[1443]: time="2024-09-04T17:29:19.093262575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.093436 containerd[1443]: time="2024-09-04T17:29:19.093418415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.093578 containerd[1443]: time="2024-09-04T17:29:19.093561175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093628295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093648935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093672015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093685295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093703615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093718015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093849895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093868615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093881895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093895375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093908055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093921335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093933535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094368 containerd[1443]: time="2024-09-04T17:29:19.093943895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:29:19.094681 containerd[1443]: time="2024-09-04T17:29:19.094246375Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:29:19.094681 containerd[1443]: time="2024-09-04T17:29:19.094302055Z" 
level=info msg="Connect containerd service" Sep 4 17:29:19.094930 containerd[1443]: time="2024-09-04T17:29:19.094328695Z" level=info msg="using legacy CRI server" Sep 4 17:29:19.094992 containerd[1443]: time="2024-09-04T17:29:19.094974415Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:29:19.095249 containerd[1443]: time="2024-09-04T17:29:19.095228695Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:29:19.096530 containerd[1443]: time="2024-09-04T17:29:19.096431735Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:29:19.096644 containerd[1443]: time="2024-09-04T17:29:19.096628375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:29:19.096910 containerd[1443]: time="2024-09-04T17:29:19.096889655Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:29:19.097103 containerd[1443]: time="2024-09-04T17:29:19.097030895Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:29:19.097103 containerd[1443]: time="2024-09-04T17:29:19.097054375Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:29:19.097227 containerd[1443]: time="2024-09-04T17:29:19.096856695Z" level=info msg="Start subscribing containerd event" Sep 4 17:29:19.097227 containerd[1443]: time="2024-09-04T17:29:19.097253975Z" level=info msg="Start recovering state" Sep 4 17:29:19.097703 containerd[1443]: time="2024-09-04T17:29:19.097326095Z" level=info msg="Start event monitor" Sep 4 17:29:19.097703 containerd[1443]: time="2024-09-04T17:29:19.097362815Z" level=info msg="Start snapshots syncer" Sep 4 17:29:19.097703 containerd[1443]: time="2024-09-04T17:29:19.097373815Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:29:19.097703 containerd[1443]: time="2024-09-04T17:29:19.097381175Z" level=info msg="Start streaming server" Sep 4 17:29:19.098063 containerd[1443]: time="2024-09-04T17:29:19.098043615Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:29:19.098230 containerd[1443]: time="2024-09-04T17:29:19.098215295Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:29:19.098495 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:29:19.099934 containerd[1443]: time="2024-09-04T17:29:19.099832415Z" level=info msg="containerd successfully booted in 0.042908s" Sep 4 17:29:19.134618 tar[1435]: linux-arm64/LICENSE Sep 4 17:29:19.134618 tar[1435]: linux-arm64/README.md Sep 4 17:29:19.146903 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:29:19.259496 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:29:19.278796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:29:19.290699 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:29:19.296235 systemd[1]: issuegen.service: Deactivated successfully. 
Sep 4 17:29:19.298396 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:29:19.300964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:29:19.314430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:29:19.317594 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:29:19.319899 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:29:19.321357 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:29:19.962457 systemd-networkd[1375]: eth0: Gained IPv6LL Sep 4 17:29:19.965775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:29:19.967649 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:29:19.982649 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:29:19.988260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:19.990654 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:29:20.008602 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:29:20.008835 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:29:20.010555 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:29:20.022272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:29:20.501375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:20.502777 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:29:20.505657 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:20.508985 systemd[1]: Startup finished in 572ms (kernel) + 4.712s (initrd) + 3.612s (userspace) = 8.897s. Sep 4 17:29:20.979703 kubelet[1522]: E0904 17:29:20.979596 1522 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:20.982393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:20.982602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:25.537324 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:29:25.538620 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:42778.service - OpenSSH per-connection server daemon (10.0.0.1:42778). Sep 4 17:29:25.586300 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 42778 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:25.588144 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:25.599827 systemd-logind[1420]: New session 1 of user core. Sep 4 17:29:25.600869 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:29:25.609617 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:29:25.623800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:29:25.627781 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 17:29:25.633080 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:25.722966 systemd[1540]: Queued start job for default target default.target. Sep 4 17:29:25.732328 systemd[1540]: Created slice app.slice - User Application Slice. Sep 4 17:29:25.732377 systemd[1540]: Reached target paths.target - Paths. Sep 4 17:29:25.732389 systemd[1540]: Reached target timers.target - Timers. Sep 4 17:29:25.733668 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:29:25.743908 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:29:25.743984 systemd[1540]: Reached target sockets.target - Sockets. Sep 4 17:29:25.743997 systemd[1540]: Reached target basic.target - Basic System. Sep 4 17:29:25.744038 systemd[1540]: Reached target default.target - Main User Target. Sep 4 17:29:25.744065 systemd[1540]: Startup finished in 104ms. Sep 4 17:29:25.744442 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:29:25.745764 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:29:25.814040 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:42784.service - OpenSSH per-connection server daemon (10.0.0.1:42784). Sep 4 17:29:25.851493 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 42784 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:25.853167 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:25.857497 systemd-logind[1420]: New session 2 of user core. Sep 4 17:29:25.871563 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:29:25.924571 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:25.934805 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:42784.service: Deactivated successfully. Sep 4 17:29:25.936271 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:29:25.938546 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:29:25.946675 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794). Sep 4 17:29:25.947493 systemd-logind[1420]: Removed session 2. Sep 4 17:29:25.975764 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:25.977514 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:25.981364 systemd-logind[1420]: New session 3 of user core. Sep 4 17:29:25.997574 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:29:26.046551 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:26.056274 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:42794.service: Deactivated successfully. Sep 4 17:29:26.057903 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:29:26.059158 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:29:26.060397 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:42800.service - OpenSSH per-connection server daemon (10.0.0.1:42800). Sep 4 17:29:26.061201 systemd-logind[1420]: Removed session 3. Sep 4 17:29:26.094538 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 42800 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:26.095883 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:26.100010 systemd-logind[1420]: New session 4 of user core. 
Sep 4 17:29:26.111534 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:29:26.164578 sshd[1565]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:26.173819 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:42800.service: Deactivated successfully. Sep 4 17:29:26.175402 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:29:26.176752 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:29:26.178029 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:42810.service - OpenSSH per-connection server daemon (10.0.0.1:42810). Sep 4 17:29:26.178827 systemd-logind[1420]: Removed session 4. Sep 4 17:29:26.211383 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 42810 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:26.212638 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:26.216676 systemd-logind[1420]: New session 5 of user core. Sep 4 17:29:26.230539 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:29:26.298427 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:29:26.298685 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:26.316154 sudo[1575]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:26.318110 sshd[1572]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:26.329024 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:42810.service: Deactivated successfully. Sep 4 17:29:26.331005 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:29:26.332505 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:29:26.343745 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820). Sep 4 17:29:26.344972 systemd-logind[1420]: Removed session 5. Sep 4 17:29:26.374036 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:26.375461 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:26.379448 systemd-logind[1420]: New session 6 of user core. Sep 4 17:29:26.395537 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:29:26.446715 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:29:26.446960 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:26.450172 sudo[1584]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:26.455199 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:29:26.455466 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:26.473622 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:26.475055 auditctl[1587]: No rules Sep 4 17:29:26.475373 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:29:26.477379 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:26.479576 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:26.505172 augenrules[1605]: No rules Sep 4 17:29:26.507440 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 4 17:29:26.508881 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:26.510501 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:26.523904 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:42820.service: Deactivated successfully. Sep 4 17:29:26.525773 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:29:26.527311 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:29:26.536762 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). Sep 4 17:29:26.537856 systemd-logind[1420]: Removed session 6. Sep 4 17:29:26.565950 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:29:26.567290 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:26.571923 systemd-logind[1420]: New session 7 of user core. Sep 4 17:29:26.577549 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:29:26.630814 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:29:26.631092 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:26.750641 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:29:26.750839 (dockerd)[1626]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:29:27.009981 dockerd[1626]: time="2024-09-04T17:29:27.009706615Z" level=info msg="Starting up" Sep 4 17:29:27.096619 dockerd[1626]: time="2024-09-04T17:29:27.096454095Z" level=info msg="Loading containers: start." Sep 4 17:29:27.182360 kernel: Initializing XFRM netlink socket Sep 4 17:29:27.244074 systemd-networkd[1375]: docker0: Link UP Sep 4 17:29:27.262548 dockerd[1626]: time="2024-09-04T17:29:27.261804775Z" level=info msg="Loading containers: done." Sep 4 17:29:27.315265 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2803322936-merged.mount: Deactivated successfully. Sep 4 17:29:27.316702 dockerd[1626]: time="2024-09-04T17:29:27.316666935Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:29:27.316958 dockerd[1626]: time="2024-09-04T17:29:27.316935695Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:29:27.317126 dockerd[1626]: time="2024-09-04T17:29:27.317110055Z" level=info msg="Daemon has completed initialization" Sep 4 17:29:27.342592 dockerd[1626]: time="2024-09-04T17:29:27.342532495Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:29:27.343622 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:29:27.983073 containerd[1443]: time="2024-09-04T17:29:27.982940215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:29:28.712923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651485264.mount: Deactivated successfully. 
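Note: earlier in this block dockerd comes up ("API listen on /run/docker.sock") alongside containerd. The Engine API is plain HTTP over that unix socket, so it can be queried with nothing but the Go standard library; a small sketch, assuming the default socket path and sufficient permissions to read it:

    // docker_version.go - query the Docker Engine API over /run/docker.sock.
    // Standard library only; the "unix" host name is just a placeholder that
    // net/http requires, the real endpoint is the socket itself.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request to the daemon's unix socket instead of TCP.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }

        resp, err := client.Get("http://unix/version")
        if err != nil {
            log.Fatalf("GET /version: %v", err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatalf("read body: %v", err)
        }
        fmt.Println(string(body)) // JSON with Version, ApiVersion, GitCommit, ...
    }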
Sep 4 17:29:29.957696 containerd[1443]: time="2024-09-04T17:29:29.957645775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.958719 containerd[1443]: time="2024-09-04T17:29:29.958693095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943742" Sep 4 17:29:29.959381 containerd[1443]: time="2024-09-04T17:29:29.958967575Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.962271 containerd[1443]: time="2024-09-04T17:29:29.962234975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:29.964199 containerd[1443]: time="2024-09-04T17:29:29.963993975Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 1.98100848s" Sep 4 17:29:29.964199 containerd[1443]: time="2024-09-04T17:29:29.964029775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\"" Sep 4 17:29:29.984778 containerd[1443]: time="2024-09-04T17:29:29.984730575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:29:31.233832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:31.240643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:31.339020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:31.345850 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:31.396173 kubelet[1842]: E0904 17:29:31.396060 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:31.399191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:31.399380 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
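Note: the kubelet keeps exiting with the same error each time systemd restarts it: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, which has not run at this point in the boot, so the crash loop is expected until it does. A small diagnostic sketch that reproduces the check, using the path from the log plus the usual kubeadm companion files (the extra paths are assumptions about a standard kubeadm layout):

    // kubelet_preflight.go - report which of the files the kubelet expects at
    // startup are present. The first path is the one missing in the log above;
    // the others follow the usual kubeadm layout (illustrative only).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/var/lib/kubelet/config.yaml",           // KubeletConfiguration (missing in the log)
            "/etc/kubernetes/kubelet.conf",           // kubeconfig used to reach the API server
            "/etc/kubernetes/bootstrap-kubelet.conf", // bootstrap kubeconfig (used until kubelet.conf exists)
        }
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("missing: %s (%v)\n", f, err)
            } else {
                fmt.Printf("present: %s\n", f)
            }
        }
    }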
Sep 4 17:29:31.654380 containerd[1443]: time="2024-09-04T17:29:31.654251095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:31.655858 containerd[1443]: time="2024-09-04T17:29:31.655609455Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881134" Sep 4 17:29:31.656644 containerd[1443]: time="2024-09-04T17:29:31.656613535Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:31.659990 containerd[1443]: time="2024-09-04T17:29:31.659959775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:31.661188 containerd[1443]: time="2024-09-04T17:29:31.661162095Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 1.67639168s" Sep 4 17:29:31.661248 containerd[1443]: time="2024-09-04T17:29:31.661194655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\"" Sep 4 17:29:31.680245 containerd[1443]: time="2024-09-04T17:29:31.680146415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 17:29:32.677789 containerd[1443]: time="2024-09-04T17:29:32.677734615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.678739 containerd[1443]: time="2024-09-04T17:29:32.678701935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154065" Sep 4 17:29:32.679430 containerd[1443]: time="2024-09-04T17:29:32.679399975Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.682402 containerd[1443]: time="2024-09-04T17:29:32.682366935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:32.683655 containerd[1443]: time="2024-09-04T17:29:32.683605695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 1.00341688s" Sep 4 17:29:32.683655 containerd[1443]: time="2024-09-04T17:29:32.683642055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\"" Sep 4 17:29:32.703113 containerd[1443]: 
time="2024-09-04T17:29:32.703065535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:29:34.849811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17267581.mount: Deactivated successfully. Sep 4 17:29:35.067770 containerd[1443]: time="2024-09-04T17:29:35.067709855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:35.068726 containerd[1443]: time="2024-09-04T17:29:35.068639415Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646049" Sep 4 17:29:35.069484 containerd[1443]: time="2024-09-04T17:29:35.069436375Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:35.071586 containerd[1443]: time="2024-09-04T17:29:35.071541095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:35.072423 containerd[1443]: time="2024-09-04T17:29:35.072380095Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 2.36927072s" Sep 4 17:29:35.072423 containerd[1443]: time="2024-09-04T17:29:35.072417135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\"" Sep 4 17:29:35.092186 containerd[1443]: time="2024-09-04T17:29:35.092136215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:29:35.732115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176206717.mount: Deactivated successfully. 
Sep 4 17:29:36.540443 containerd[1443]: time="2024-09-04T17:29:36.540362615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:36.540863 containerd[1443]: time="2024-09-04T17:29:36.540763935Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Sep 4 17:29:36.541674 containerd[1443]: time="2024-09-04T17:29:36.541644535Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:36.547374 containerd[1443]: time="2024-09-04T17:29:36.547305575Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.4551278s" Sep 4 17:29:36.547374 containerd[1443]: time="2024-09-04T17:29:36.547371655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:29:36.548507 containerd[1443]: time="2024-09-04T17:29:36.548460055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:36.569171 containerd[1443]: time="2024-09-04T17:29:36.569129135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:29:37.072124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891541424.mount: Deactivated successfully. 
Sep 4 17:29:37.078366 containerd[1443]: time="2024-09-04T17:29:37.078293495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:37.079398 containerd[1443]: time="2024-09-04T17:29:37.079362255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Sep 4 17:29:37.082190 containerd[1443]: time="2024-09-04T17:29:37.082126055Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:37.085307 containerd[1443]: time="2024-09-04T17:29:37.084290655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:37.085307 containerd[1443]: time="2024-09-04T17:29:37.085197535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 516.02876ms" Sep 4 17:29:37.085307 containerd[1443]: time="2024-09-04T17:29:37.085224815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:29:37.104847 containerd[1443]: time="2024-09-04T17:29:37.104804575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:29:37.807714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237407320.mount: Deactivated successfully. Sep 4 17:29:39.665901 containerd[1443]: time="2024-09-04T17:29:39.665836135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:39.667726 containerd[1443]: time="2024-09-04T17:29:39.667685215Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Sep 4 17:29:39.668797 containerd[1443]: time="2024-09-04T17:29:39.668758735Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:39.672235 containerd[1443]: time="2024-09-04T17:29:39.672156255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:39.673863 containerd[1443]: time="2024-09-04T17:29:39.673820495Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.5689726s" Sep 4 17:29:39.673905 containerd[1443]: time="2024-09-04T17:29:39.673860735Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Sep 4 17:29:41.649934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
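Note: the kubelet failure at 17:29:31.399380 is followed by "Scheduled restart job, restart counter is at 2" at 17:29:41.649934, roughly a ten-second gap, consistent with a ~10 s restart delay configured on the unit (an assumption; the unit file itself is not shown in this log). A small sketch computing the interval from the two logged timestamps:

    // restart_gap.go - interval between the kubelet failure and the next
    // "Scheduled restart job" journal entry, both timestamps copied from above.
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        const layout = "15:04:05.000000"
        failed, err := time.Parse(layout, "17:29:31.399380")
        if err != nil {
            log.Fatal(err)
        }
        rescheduled, err := time.Parse(layout, "17:29:41.649934")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("restart scheduled after", rescheduled.Sub(failed)) // ~10.25s
    }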
Sep 4 17:29:41.659723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:41.780994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:41.785400 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:41.839351 kubelet[2069]: E0904 17:29:41.837791 2069 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:41.841179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:41.841311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:43.768624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:43.782583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:43.798708 systemd[1]: Reloading requested from client PID 2084 ('systemctl') (unit session-7.scope)... Sep 4 17:29:43.798723 systemd[1]: Reloading... Sep 4 17:29:43.874362 zram_generator::config[2121]: No configuration found. Sep 4 17:29:44.035103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:44.089304 systemd[1]: Reloading finished in 290 ms. Sep 4 17:29:44.128584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:29:44.128645 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:29:44.129426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:44.132277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:44.234268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:44.238259 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:29:44.277577 kubelet[2167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:29:44.277577 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:29:44.277577 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:29:44.278420 kubelet[2167]: I0904 17:29:44.278377 2167 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:29:44.786500 kubelet[2167]: I0904 17:29:44.786442 2167 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:29:44.786500 kubelet[2167]: I0904 17:29:44.786473 2167 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:29:44.786704 kubelet[2167]: I0904 17:29:44.786690 2167 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:29:44.830928 kubelet[2167]: I0904 17:29:44.830781 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:44.830928 kubelet[2167]: E0904 17:29:44.830874 2167 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.837922 kubelet[2167]: I0904 17:29:44.837888 2167 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:29:44.838616 kubelet[2167]: I0904 17:29:44.838580 2167 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:29:44.838777 kubelet[2167]: I0904 17:29:44.838611 2167 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:29:44.838926 kubelet[2167]: I0904 17:29:44.838915 2167 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:29:44.838961 kubelet[2167]: I0904 17:29:44.838927 2167 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:29:44.839314 kubelet[2167]: I0904 17:29:44.839291 2167 state_mem.go:36] "Initialized new in-memory state store" Sep 4 
17:29:44.841256 kubelet[2167]: I0904 17:29:44.841173 2167 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:29:44.841256 kubelet[2167]: I0904 17:29:44.841200 2167 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:29:44.841849 kubelet[2167]: I0904 17:29:44.841622 2167 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:29:44.841849 kubelet[2167]: W0904 17:29:44.841711 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.841849 kubelet[2167]: E0904 17:29:44.841759 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.842056 kubelet[2167]: I0904 17:29:44.841885 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:29:44.842508 kubelet[2167]: W0904 17:29:44.842402 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.842508 kubelet[2167]: E0904 17:29:44.842454 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.843373 kubelet[2167]: I0904 17:29:44.843345 2167 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:29:44.843888 kubelet[2167]: I0904 17:29:44.843856 2167 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:29:44.844036 kubelet[2167]: W0904 17:29:44.844014 2167 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
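Note: every reflector error above ends in "dial tcp 10.0.0.103:6443: connect: connection refused": nothing is listening on the API server port yet, because the kube-apiserver static pod is only created further down, so the kubelet's informers simply retry. The condition can be reproduced with a plain TCP dial; a minimal sketch using the address from the log:

    // apiserver_probe.go - check whether the API server endpoint from the log
    // (10.0.0.103:6443) accepts TCP connections yet. "connection refused"
    // matches the reflector errors above: the port is simply not open yet.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.103:6443", 2*time.Second)
        if err != nil {
            fmt.Println("not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port 6443 is accepting connections")
    }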
Sep 4 17:29:44.858385 kubelet[2167]: I0904 17:29:44.858120 2167 server.go:1264] "Started kubelet" Sep 4 17:29:44.859813 kubelet[2167]: I0904 17:29:44.859723 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:29:44.860889 kubelet[2167]: E0904 17:29:44.860025 2167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ab802b584d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:29:44.858076375 +0000 UTC m=+0.616952241,LastTimestamp:2024-09-04 17:29:44.858076375 +0000 UTC m=+0.616952241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:29:44.860889 kubelet[2167]: I0904 17:29:44.860712 2167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:29:44.862431 kubelet[2167]: I0904 17:29:44.862391 2167 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:29:44.862954 kubelet[2167]: I0904 17:29:44.862931 2167 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:29:44.863286 kubelet[2167]: E0904 17:29:44.863247 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms" Sep 4 17:29:44.863368 kubelet[2167]: I0904 17:29:44.863313 2167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:29:44.863575 kubelet[2167]: I0904 17:29:44.863559 2167 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:29:44.865490 kubelet[2167]: I0904 17:29:44.864544 2167 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:29:44.865490 kubelet[2167]: I0904 17:29:44.864655 2167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:29:44.865490 kubelet[2167]: W0904 17:29:44.864936 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.865490 kubelet[2167]: E0904 17:29:44.864980 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.865490 kubelet[2167]: I0904 17:29:44.865227 2167 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:29:44.867807 kubelet[2167]: I0904 17:29:44.867781 2167 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:29:44.868424 kubelet[2167]: I0904 17:29:44.868384 2167 server.go:455] "Adding debug handlers to 
kubelet server" Sep 4 17:29:44.869701 kubelet[2167]: E0904 17:29:44.869678 2167 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:29:44.883761 kubelet[2167]: I0904 17:29:44.883728 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:29:44.883761 kubelet[2167]: I0904 17:29:44.883746 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:29:44.883761 kubelet[2167]: I0904 17:29:44.883764 2167 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:44.891434 kubelet[2167]: I0904 17:29:44.891364 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:29:44.892804 kubelet[2167]: I0904 17:29:44.892774 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:29:44.893128 kubelet[2167]: I0904 17:29:44.892951 2167 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:29:44.893128 kubelet[2167]: I0904 17:29:44.892973 2167 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:29:44.893128 kubelet[2167]: E0904 17:29:44.893105 2167 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:29:44.893693 kubelet[2167]: W0904 17:29:44.893645 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.893754 kubelet[2167]: E0904 17:29:44.893699 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:44.963930 kubelet[2167]: I0904 17:29:44.963885 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:44.964428 kubelet[2167]: E0904 17:29:44.964391 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 4 17:29:44.973011 kubelet[2167]: I0904 17:29:44.972984 2167 policy_none.go:49] "None policy: Start" Sep 4 17:29:44.973682 kubelet[2167]: I0904 17:29:44.973665 2167 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:29:44.973721 kubelet[2167]: I0904 17:29:44.973692 2167 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:29:44.993294 kubelet[2167]: E0904 17:29:44.993228 2167 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:29:45.021560 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:29:45.035390 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:29:45.038352 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:29:45.048210 kubelet[2167]: I0904 17:29:45.048164 2167 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:29:45.048428 kubelet[2167]: I0904 17:29:45.048383 2167 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:29:45.049038 kubelet[2167]: I0904 17:29:45.048502 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:29:45.049572 kubelet[2167]: E0904 17:29:45.049542 2167 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:29:45.063698 kubelet[2167]: E0904 17:29:45.063661 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms" Sep 4 17:29:45.166160 kubelet[2167]: I0904 17:29:45.166138 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:45.166505 kubelet[2167]: E0904 17:29:45.166480 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 4 17:29:45.193705 kubelet[2167]: I0904 17:29:45.193599 2167 topology_manager.go:215] "Topology Admit Handler" podUID="84ab2f9beaedf2b1095edb78f87017af" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:29:45.194707 kubelet[2167]: I0904 17:29:45.194675 2167 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:29:45.195386 kubelet[2167]: I0904 17:29:45.195354 2167 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:29:45.200930 systemd[1]: Created slice kubepods-burstable-pod84ab2f9beaedf2b1095edb78f87017af.slice - libcontainer container kubepods-burstable-pod84ab2f9beaedf2b1095edb78f87017af.slice. Sep 4 17:29:45.214553 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice. Sep 4 17:29:45.228846 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice. 
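Note: the three "Topology Admit Handler" entries above are the control-plane static pods the kubelet picked up from the static pod path logged earlier (/etc/kubernetes/manifests); with the API server still unreachable, manifests on disk are the only possible source. A minimal sketch that lists that directory, assuming the standard kubeadm layout on this host:

    // static_pods.go - list the manifests in the static pod path the kubelet
    // logged above ("Adding static pod path" path="/etc/kubernetes/manifests").
    // Each manifest here is run by the kubelet as a static pod, e.g.
    // kube-apiserver-localhost.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/manifests"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("read %s: %v", dir, err)
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            info, err := e.Info()
            if err != nil {
                continue
            }
            fmt.Printf("%-45s %6d bytes\n", filepath.Join(dir, e.Name()), info.Size())
        }
    }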
Sep 4 17:29:45.266696 kubelet[2167]: I0904 17:29:45.266655 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:45.266696 kubelet[2167]: I0904 17:29:45.266690 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:45.266842 kubelet[2167]: I0904 17:29:45.266711 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:45.266842 kubelet[2167]: I0904 17:29:45.266728 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:45.266842 kubelet[2167]: I0904 17:29:45.266745 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:45.266842 kubelet[2167]: I0904 17:29:45.266780 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:45.266842 kubelet[2167]: I0904 17:29:45.266799 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:45.266943 kubelet[2167]: I0904 17:29:45.266816 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:45.266943 kubelet[2167]: I0904 17:29:45.266864 2167 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:45.464542 kubelet[2167]: E0904 17:29:45.464427 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms" Sep 4 17:29:45.512905 kubelet[2167]: E0904 17:29:45.512861 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:45.513514 containerd[1443]: time="2024-09-04T17:29:45.513472815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84ab2f9beaedf2b1095edb78f87017af,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:45.527807 kubelet[2167]: E0904 17:29:45.527750 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:45.528179 containerd[1443]: time="2024-09-04T17:29:45.528136575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:45.531035 kubelet[2167]: E0904 17:29:45.530875 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:45.531529 containerd[1443]: time="2024-09-04T17:29:45.531253015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:45.568294 kubelet[2167]: I0904 17:29:45.568262 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:45.568663 kubelet[2167]: E0904 17:29:45.568623 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 4 17:29:45.710580 kubelet[2167]: W0904 17:29:45.710500 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:45.710580 kubelet[2167]: E0904 17:29:45.710564 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:45.717071 kubelet[2167]: E0904 17:29:45.716913 2167 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ab802b584d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:29:44.858076375 +0000 UTC m=+0.616952241,LastTimestamp:2024-09-04 17:29:44.858076375 +0000 
UTC m=+0.616952241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:29:46.075660 kubelet[2167]: W0904 17:29:46.075534 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.075660 kubelet[2167]: E0904 17:29:46.075580 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.094154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504569761.mount: Deactivated successfully. Sep 4 17:29:46.100415 containerd[1443]: time="2024-09-04T17:29:46.100362935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:46.101356 containerd[1443]: time="2024-09-04T17:29:46.101282975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:46.102094 containerd[1443]: time="2024-09-04T17:29:46.102072575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:46.102350 containerd[1443]: time="2024-09-04T17:29:46.102311815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:29:46.102977 containerd[1443]: time="2024-09-04T17:29:46.102785975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:29:46.103720 containerd[1443]: time="2024-09-04T17:29:46.103555135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:29:46.104438 containerd[1443]: time="2024-09-04T17:29:46.104101335Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:46.108474 containerd[1443]: time="2024-09-04T17:29:46.108424855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:46.109396 containerd[1443]: time="2024-09-04T17:29:46.109348775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.00532ms" Sep 4 17:29:46.110030 containerd[1443]: time="2024-09-04T17:29:46.109999855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.431ms" Sep 4 17:29:46.112187 containerd[1443]: time="2024-09-04T17:29:46.112154335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.93412ms" Sep 4 17:29:46.125819 kubelet[2167]: W0904 17:29:46.123281 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.125819 kubelet[2167]: E0904 17:29:46.123389 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.252089 containerd[1443]: time="2024-09-04T17:29:46.251983695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:46.252089 containerd[1443]: time="2024-09-04T17:29:46.252056735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.252276 containerd[1443]: time="2024-09-04T17:29:46.252072335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:46.252276 containerd[1443]: time="2024-09-04T17:29:46.252085335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.253060215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.253100375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.253118055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.253132015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.252467375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.252507095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.252520255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:46.253590 containerd[1443]: time="2024-09-04T17:29:46.252529495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:46.265820 kubelet[2167]: E0904 17:29:46.265781 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="1.6s" Sep 4 17:29:46.273502 systemd[1]: Started cri-containerd-a3ab260cf1eb1626a935aadaf37bb512735191967bd762627aec1982654b15fd.scope - libcontainer container a3ab260cf1eb1626a935aadaf37bb512735191967bd762627aec1982654b15fd. Sep 4 17:29:46.274573 systemd[1]: Started cri-containerd-e07122ef77338de32e7ed5500c05267d83b0caac9a5859081a422a11e100ae9c.scope - libcontainer container e07122ef77338de32e7ed5500c05267d83b0caac9a5859081a422a11e100ae9c. Sep 4 17:29:46.276474 systemd[1]: Started cri-containerd-feb5f78f641eca4021f7db638c6bf6d3da6094ed85f9330fc7b4e6701bc8ed1c.scope - libcontainer container feb5f78f641eca4021f7db638c6bf6d3da6094ed85f9330fc7b4e6701bc8ed1c. Sep 4 17:29:46.305378 containerd[1443]: time="2024-09-04T17:29:46.305134375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84ab2f9beaedf2b1095edb78f87017af,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3ab260cf1eb1626a935aadaf37bb512735191967bd762627aec1982654b15fd\"" Sep 4 17:29:46.308086 kubelet[2167]: E0904 17:29:46.308043 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:46.310776 containerd[1443]: time="2024-09-04T17:29:46.310733175Z" level=info msg="CreateContainer within sandbox \"a3ab260cf1eb1626a935aadaf37bb512735191967bd762627aec1982654b15fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:29:46.316479 containerd[1443]: time="2024-09-04T17:29:46.316398935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb5f78f641eca4021f7db638c6bf6d3da6094ed85f9330fc7b4e6701bc8ed1c\"" Sep 4 17:29:46.317418 kubelet[2167]: E0904 17:29:46.317383 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:46.320742 containerd[1443]: time="2024-09-04T17:29:46.320613415Z" level=info msg="CreateContainer within sandbox \"feb5f78f641eca4021f7db638c6bf6d3da6094ed85f9330fc7b4e6701bc8ed1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:29:46.322747 containerd[1443]: time="2024-09-04T17:29:46.322701615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07122ef77338de32e7ed5500c05267d83b0caac9a5859081a422a11e100ae9c\"" Sep 4 17:29:46.323489 kubelet[2167]: E0904 17:29:46.323381 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:46.325080 containerd[1443]: time="2024-09-04T17:29:46.325051775Z" level=info 
msg="CreateContainer within sandbox \"e07122ef77338de32e7ed5500c05267d83b0caac9a5859081a422a11e100ae9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:29:46.335480 containerd[1443]: time="2024-09-04T17:29:46.335292015Z" level=info msg="CreateContainer within sandbox \"a3ab260cf1eb1626a935aadaf37bb512735191967bd762627aec1982654b15fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"788d04ce12f18125c45b8040a474790c56a14171952af9c3c57f77fdb1f87989\"" Sep 4 17:29:46.336128 containerd[1443]: time="2024-09-04T17:29:46.336092175Z" level=info msg="StartContainer for \"788d04ce12f18125c45b8040a474790c56a14171952af9c3c57f77fdb1f87989\"" Sep 4 17:29:46.337296 containerd[1443]: time="2024-09-04T17:29:46.337206975Z" level=info msg="CreateContainer within sandbox \"feb5f78f641eca4021f7db638c6bf6d3da6094ed85f9330fc7b4e6701bc8ed1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a457f8fffe358efc75a41b649c7764eb2cfaf01d098d33182c3a644b1c2ebcff\"" Sep 4 17:29:46.337597 containerd[1443]: time="2024-09-04T17:29:46.337573855Z" level=info msg="StartContainer for \"a457f8fffe358efc75a41b649c7764eb2cfaf01d098d33182c3a644b1c2ebcff\"" Sep 4 17:29:46.343982 containerd[1443]: time="2024-09-04T17:29:46.343528695Z" level=info msg="CreateContainer within sandbox \"e07122ef77338de32e7ed5500c05267d83b0caac9a5859081a422a11e100ae9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5834c821996060c0dd53cfe1358a76b5e89cb1637436c8ca20bba381f407196a\"" Sep 4 17:29:46.344067 containerd[1443]: time="2024-09-04T17:29:46.343985695Z" level=info msg="StartContainer for \"5834c821996060c0dd53cfe1358a76b5e89cb1637436c8ca20bba381f407196a\"" Sep 4 17:29:46.363124 systemd[1]: Started cri-containerd-788d04ce12f18125c45b8040a474790c56a14171952af9c3c57f77fdb1f87989.scope - libcontainer container 788d04ce12f18125c45b8040a474790c56a14171952af9c3c57f77fdb1f87989. Sep 4 17:29:46.365946 systemd[1]: Started cri-containerd-a457f8fffe358efc75a41b649c7764eb2cfaf01d098d33182c3a644b1c2ebcff.scope - libcontainer container a457f8fffe358efc75a41b649c7764eb2cfaf01d098d33182c3a644b1c2ebcff. Sep 4 17:29:46.370622 kubelet[2167]: I0904 17:29:46.370389 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:46.372082 kubelet[2167]: E0904 17:29:46.372048 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 4 17:29:46.376493 systemd[1]: Started cri-containerd-5834c821996060c0dd53cfe1358a76b5e89cb1637436c8ca20bba381f407196a.scope - libcontainer container 5834c821996060c0dd53cfe1358a76b5e89cb1637436c8ca20bba381f407196a. 
Sep 4 17:29:46.407593 containerd[1443]: time="2024-09-04T17:29:46.407539855Z" level=info msg="StartContainer for \"788d04ce12f18125c45b8040a474790c56a14171952af9c3c57f77fdb1f87989\" returns successfully" Sep 4 17:29:46.421872 containerd[1443]: time="2024-09-04T17:29:46.421757415Z" level=info msg="StartContainer for \"5834c821996060c0dd53cfe1358a76b5e89cb1637436c8ca20bba381f407196a\" returns successfully" Sep 4 17:29:46.422120 containerd[1443]: time="2024-09-04T17:29:46.421977895Z" level=info msg="StartContainer for \"a457f8fffe358efc75a41b649c7764eb2cfaf01d098d33182c3a644b1c2ebcff\" returns successfully" Sep 4 17:29:46.448982 kubelet[2167]: W0904 17:29:46.448899 2167 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.448982 kubelet[2167]: E0904 17:29:46.448943 2167 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Sep 4 17:29:46.902252 kubelet[2167]: E0904 17:29:46.902176 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:46.908663 kubelet[2167]: E0904 17:29:46.908553 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:46.914108 kubelet[2167]: E0904 17:29:46.913986 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:47.912971 kubelet[2167]: E0904 17:29:47.912937 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:47.974808 kubelet[2167]: I0904 17:29:47.974776 2167 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:48.294452 kubelet[2167]: E0904 17:29:48.294342 2167 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:29:48.388749 kubelet[2167]: I0904 17:29:48.388697 2167 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:29:48.395979 kubelet[2167]: E0904 17:29:48.395941 2167 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:48.496547 kubelet[2167]: E0904 17:29:48.496492 2167 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:48.597384 kubelet[2167]: E0904 17:29:48.597024 2167 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:48.697767 kubelet[2167]: E0904 17:29:48.697726 2167 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:48.845563 kubelet[2167]: I0904 17:29:48.844427 2167 apiserver.go:52] "Watching apiserver" Sep 4 17:29:48.863797 kubelet[2167]: I0904 17:29:48.863698 2167 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:29:48.921904 kubelet[2167]: E0904 17:29:48.921870 2167 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:48.922346 kubelet[2167]: E0904 17:29:48.922318 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:50.333084 kubelet[2167]: E0904 17:29:50.333028 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:50.697998 systemd[1]: Reloading requested from client PID 2446 ('systemctl') (unit session-7.scope)... Sep 4 17:29:50.698012 systemd[1]: Reloading... Sep 4 17:29:50.763779 zram_generator::config[2483]: No configuration found. Sep 4 17:29:50.857267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:50.915526 kubelet[2167]: E0904 17:29:50.915489 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:50.922852 systemd[1]: Reloading finished in 224 ms. Sep 4 17:29:50.967998 kubelet[2167]: I0904 17:29:50.967804 2167 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:50.968099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:50.981630 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:29:50.981891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:50.997695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:51.099838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:51.103952 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:29:51.155670 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:29:51.155670 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:29:51.155670 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:29:51.156001 kubelet[2525]: I0904 17:29:51.155722 2525 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:29:51.160792 kubelet[2525]: I0904 17:29:51.160751 2525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:29:51.160792 kubelet[2525]: I0904 17:29:51.160781 2525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:29:51.160979 kubelet[2525]: I0904 17:29:51.160963 2525 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:29:51.162280 kubelet[2525]: I0904 17:29:51.162257 2525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:29:51.163837 kubelet[2525]: I0904 17:29:51.163812 2525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:51.171125 kubelet[2525]: I0904 17:29:51.170974 2525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:29:51.171508 kubelet[2525]: I0904 17:29:51.171477 2525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:29:51.171780 kubelet[2525]: I0904 17:29:51.171578 2525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:29:51.172004 kubelet[2525]: I0904 17:29:51.171985 2525 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:29:51.172058 kubelet[2525]: I0904 17:29:51.172049 2525 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:29:51.172243 kubelet[2525]: I0904 17:29:51.172132 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:51.172368 kubelet[2525]: I0904 17:29:51.172355 2525 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:29:51.172565 kubelet[2525]: I0904 17:29:51.172547 2525 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Sep 4 17:29:51.172776 kubelet[2525]: I0904 17:29:51.172760 2525 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:29:51.173024 kubelet[2525]: I0904 17:29:51.172968 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:29:51.178354 kubelet[2525]: I0904 17:29:51.178309 2525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:29:51.178603 kubelet[2525]: I0904 17:29:51.178573 2525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:29:51.183417 kubelet[2525]: I0904 17:29:51.181812 2525 server.go:1264] "Started kubelet" Sep 4 17:29:51.183417 kubelet[2525]: I0904 17:29:51.182961 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:29:51.183417 kubelet[2525]: I0904 17:29:51.183225 2525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:29:51.183417 kubelet[2525]: I0904 17:29:51.183265 2525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:29:51.184146 kubelet[2525]: I0904 17:29:51.184119 2525 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:29:51.185993 kubelet[2525]: E0904 17:29:51.185966 2525 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:29:51.186298 kubelet[2525]: I0904 17:29:51.186283 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:29:51.187509 kubelet[2525]: E0904 17:29:51.187481 2525 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:51.187620 kubelet[2525]: I0904 17:29:51.187609 2525 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:29:51.187801 kubelet[2525]: I0904 17:29:51.187789 2525 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:29:51.188039 kubelet[2525]: I0904 17:29:51.188026 2525 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:29:51.196747 kubelet[2525]: I0904 17:29:51.196713 2525 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:29:51.197305 kubelet[2525]: I0904 17:29:51.197263 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:29:51.201832 kubelet[2525]: I0904 17:29:51.201790 2525 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:29:51.204425 kubelet[2525]: I0904 17:29:51.202861 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:29:51.204425 kubelet[2525]: I0904 17:29:51.203914 2525 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:29:51.204425 kubelet[2525]: I0904 17:29:51.203946 2525 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:29:51.204425 kubelet[2525]: I0904 17:29:51.203988 2525 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:29:51.204425 kubelet[2525]: E0904 17:29:51.204034 2525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:29:51.236437 kubelet[2525]: I0904 17:29:51.235189 2525 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:29:51.236437 kubelet[2525]: I0904 17:29:51.236408 2525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:29:51.236437 kubelet[2525]: I0904 17:29:51.236440 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:51.236717 kubelet[2525]: I0904 17:29:51.236690 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:29:51.236751 kubelet[2525]: I0904 17:29:51.236712 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:29:51.236751 kubelet[2525]: I0904 17:29:51.236750 2525 policy_none.go:49] "None policy: Start" Sep 4 17:29:51.237536 kubelet[2525]: I0904 17:29:51.237501 2525 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:29:51.237536 kubelet[2525]: I0904 17:29:51.237528 2525 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:29:51.237674 kubelet[2525]: I0904 17:29:51.237658 2525 state_mem.go:75] "Updated machine memory state" Sep 4 17:29:51.241985 kubelet[2525]: I0904 17:29:51.241950 2525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:29:51.242213 kubelet[2525]: I0904 17:29:51.242113 2525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:29:51.242405 kubelet[2525]: I0904 17:29:51.242309 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:29:51.291774 kubelet[2525]: I0904 17:29:51.291729 2525 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:51.299483 kubelet[2525]: I0904 17:29:51.299416 2525 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:29:51.299624 kubelet[2525]: I0904 17:29:51.299607 2525 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:29:51.306535 kubelet[2525]: I0904 17:29:51.306489 2525 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:29:51.306724 kubelet[2525]: I0904 17:29:51.306701 2525 topology_manager.go:215] "Topology Admit Handler" podUID="84ab2f9beaedf2b1095edb78f87017af" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:29:51.306804 kubelet[2525]: I0904 17:29:51.306788 2525 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:29:51.317092 kubelet[2525]: E0904 17:29:51.316762 2525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:51.489018 kubelet[2525]: I0904 17:29:51.488828 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:51.489018 kubelet[2525]: I0904 17:29:51.488870 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:51.489018 kubelet[2525]: I0904 17:29:51.488896 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:51.489018 kubelet[2525]: I0904 17:29:51.488914 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:51.489018 kubelet[2525]: I0904 17:29:51.488936 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:51.489712 kubelet[2525]: I0904 17:29:51.488952 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ab2f9beaedf2b1095edb78f87017af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84ab2f9beaedf2b1095edb78f87017af\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:51.489712 kubelet[2525]: I0904 17:29:51.488969 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:51.489712 kubelet[2525]: I0904 17:29:51.489128 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:51.489712 kubelet[2525]: I0904 17:29:51.489145 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:51.616436 kubelet[2525]: E0904 17:29:51.616382 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.616905 kubelet[2525]: E0904 17:29:51.616831 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.617214 kubelet[2525]: E0904 17:29:51.617194 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.741843 sudo[2562]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:29:51.742100 sudo[2562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 4 17:29:52.174157 kubelet[2525]: I0904 17:29:52.174038 2525 apiserver.go:52] "Watching apiserver" Sep 4 17:29:52.177677 sudo[2562]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:52.188884 kubelet[2525]: I0904 17:29:52.188663 2525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:29:52.220230 kubelet[2525]: E0904 17:29:52.220179 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:52.220977 kubelet[2525]: E0904 17:29:52.220938 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:52.232237 kubelet[2525]: E0904 17:29:52.231496 2525 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:52.232237 kubelet[2525]: E0904 17:29:52.231956 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:52.245099 kubelet[2525]: I0904 17:29:52.245024 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.24498452 podStartE2EDuration="1.24498452s" podCreationTimestamp="2024-09-04 17:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:52.243789034 +0000 UTC m=+1.136203578" watchObservedRunningTime="2024-09-04 17:29:52.24498452 +0000 UTC m=+1.137399064" Sep 4 17:29:52.253967 kubelet[2525]: I0904 17:29:52.253241 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2532238 podStartE2EDuration="1.2532238s" podCreationTimestamp="2024-09-04 17:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:52.253068119 +0000 UTC m=+1.145482663" watchObservedRunningTime="2024-09-04 17:29:52.2532238 +0000 UTC m=+1.145638344" Sep 4 17:29:52.270616 kubelet[2525]: I0904 17:29:52.270554 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.270537166 podStartE2EDuration="2.270537166s" podCreationTimestamp="2024-09-04 17:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:52.261547641 +0000 UTC m=+1.153962185" watchObservedRunningTime="2024-09-04 17:29:52.270537166 +0000 UTC m=+1.162951710" Sep 4 17:29:53.221565 kubelet[2525]: E0904 17:29:53.221515 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:54.532437 sudo[1616]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:54.534127 sshd[1613]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:54.539704 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:42826.service: Deactivated successfully. Sep 4 17:29:54.542872 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:29:54.544417 systemd[1]: session-7.scope: Consumed 7.176s CPU time, 140.4M memory peak, 0B memory swap peak. Sep 4 17:29:54.545168 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:29:54.546677 systemd-logind[1420]: Removed session 7. Sep 4 17:29:57.261691 kubelet[2525]: E0904 17:29:57.261590 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:58.228598 kubelet[2525]: E0904 17:29:58.228242 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:59.000913 kubelet[2525]: E0904 17:29:59.000872 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:59.230393 kubelet[2525]: E0904 17:29:59.230315 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:59.903889 kubelet[2525]: E0904 17:29:59.903559 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:00.232203 kubelet[2525]: E0904 17:30:00.232091 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:03.620547 update_engine[1425]: I0904 17:30:03.620489 1425 update_attempter.cc:509] Updating boot flags... Sep 4 17:30:03.667380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2612) Sep 4 17:30:06.985088 kubelet[2525]: I0904 17:30:06.985037 2525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:30:06.985600 containerd[1443]: time="2024-09-04T17:30:06.985435821Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:30:06.985783 kubelet[2525]: I0904 17:30:06.985638 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:30:07.630386 kubelet[2525]: I0904 17:30:07.627966 2525 topology_manager.go:215] "Topology Admit Handler" podUID="811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b" podNamespace="kube-system" podName="kube-proxy-q945n" Sep 4 17:30:07.640647 kubelet[2525]: I0904 17:30:07.640578 2525 topology_manager.go:215] "Topology Admit Handler" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" podNamespace="kube-system" podName="cilium-4rzst" Sep 4 17:30:07.641210 systemd[1]: Created slice kubepods-besteffort-pod811fdb06_9f7e_4d8e_9ae3_0f2f73699b2b.slice - libcontainer container kubepods-besteffort-pod811fdb06_9f7e_4d8e_9ae3_0f2f73699b2b.slice. Sep 4 17:30:07.659420 systemd[1]: Created slice kubepods-burstable-pod3fb8c2b9_ddc1_47e4_b6b5_8a6c2ab3de7b.slice - libcontainer container kubepods-burstable-pod3fb8c2b9_ddc1_47e4_b6b5_8a6c2ab3de7b.slice. Sep 4 17:30:07.793621 kubelet[2525]: I0904 17:30:07.793502 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b-xtables-lock\") pod \"kube-proxy-q945n\" (UID: \"811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b\") " pod="kube-system/kube-proxy-q945n" Sep 4 17:30:07.793621 kubelet[2525]: I0904 17:30:07.793550 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-xtables-lock\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793621 kubelet[2525]: I0904 17:30:07.793578 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-kernel\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793621 kubelet[2525]: I0904 17:30:07.793606 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbmq\" (UniqueName: \"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-kube-api-access-jxbmq\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793621 kubelet[2525]: I0904 17:30:07.793626 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b-lib-modules\") pod \"kube-proxy-q945n\" (UID: \"811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b\") " pod="kube-system/kube-proxy-q945n" Sep 4 17:30:07.793858 kubelet[2525]: I0904 17:30:07.793642 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-lib-modules\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793858 kubelet[2525]: I0904 17:30:07.793659 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-cgroup\") pod \"cilium-4rzst\" (UID: 
\"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793858 kubelet[2525]: I0904 17:30:07.793673 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cni-path\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793858 kubelet[2525]: I0904 17:30:07.793706 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hostproc\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.793858 kubelet[2525]: I0904 17:30:07.793722 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-clustermesh-secrets\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795253 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-config-path\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795320 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b-kube-proxy\") pod \"kube-proxy-q945n\" (UID: \"811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b\") " pod="kube-system/kube-proxy-q945n" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795396 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-bpf-maps\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795422 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-etc-cni-netd\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795444 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hubble-tls\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797357 kubelet[2525]: I0904 17:30:07.795472 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rmd\" (UniqueName: \"kubernetes.io/projected/811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b-kube-api-access-t8rmd\") pod \"kube-proxy-q945n\" (UID: \"811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b\") " pod="kube-system/kube-proxy-q945n" Sep 4 17:30:07.797626 kubelet[2525]: I0904 17:30:07.795494 2525 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-run\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.797626 kubelet[2525]: I0904 17:30:07.795515 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-net\") pod \"cilium-4rzst\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " pod="kube-system/cilium-4rzst" Sep 4 17:30:07.951468 kubelet[2525]: E0904 17:30:07.950804 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:07.952523 containerd[1443]: time="2024-09-04T17:30:07.952253914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q945n,Uid:811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:07.962714 kubelet[2525]: E0904 17:30:07.962677 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:07.963443 containerd[1443]: time="2024-09-04T17:30:07.963139254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4rzst,Uid:3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:07.994610 containerd[1443]: time="2024-09-04T17:30:07.994523473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:07.994941 containerd[1443]: time="2024-09-04T17:30:07.994578233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:07.994941 containerd[1443]: time="2024-09-04T17:30:07.994597673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:07.994941 containerd[1443]: time="2024-09-04T17:30:07.994612193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:08.004733 containerd[1443]: time="2024-09-04T17:30:08.003998770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:08.004733 containerd[1443]: time="2024-09-04T17:30:08.004194611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:08.004733 containerd[1443]: time="2024-09-04T17:30:08.004218291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:08.004733 containerd[1443]: time="2024-09-04T17:30:08.004232891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:08.019639 systemd[1]: Started cri-containerd-0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc.scope - libcontainer container 0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc. 
Sep 4 17:30:08.023004 kubelet[2525]: I0904 17:30:08.022581 2525 topology_manager.go:215] "Topology Admit Handler" podUID="058eeb97-8442-450e-868e-6d751e021b15" podNamespace="kube-system" podName="cilium-operator-599987898-v4w54" Sep 4 17:30:08.034492 systemd[1]: Created slice kubepods-besteffort-pod058eeb97_8442_450e_868e_6d751e021b15.slice - libcontainer container kubepods-besteffort-pod058eeb97_8442_450e_868e_6d751e021b15.slice. Sep 4 17:30:08.046606 systemd[1]: Started cri-containerd-36d1ce8a4dc6b64b4ee7b765db846dd088e087ac2337b40d8e3e0d9b7aba0184.scope - libcontainer container 36d1ce8a4dc6b64b4ee7b765db846dd088e087ac2337b40d8e3e0d9b7aba0184. Sep 4 17:30:08.074763 containerd[1443]: time="2024-09-04T17:30:08.074626094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4rzst,Uid:3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\"" Sep 4 17:30:08.076553 kubelet[2525]: E0904 17:30:08.076521 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:08.077738 containerd[1443]: time="2024-09-04T17:30:08.077643140Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:30:08.097812 containerd[1443]: time="2024-09-04T17:30:08.097770655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q945n,Uid:811fdb06-9f7e-4d8e-9ae3-0f2f73699b2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d1ce8a4dc6b64b4ee7b765db846dd088e087ac2337b40d8e3e0d9b7aba0184\"" Sep 4 17:30:08.098242 kubelet[2525]: I0904 17:30:08.098207 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/058eeb97-8442-450e-868e-6d751e021b15-cilium-config-path\") pod \"cilium-operator-599987898-v4w54\" (UID: \"058eeb97-8442-450e-868e-6d751e021b15\") " pod="kube-system/cilium-operator-599987898-v4w54" Sep 4 17:30:08.098314 kubelet[2525]: I0904 17:30:08.098248 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvjf\" (UniqueName: \"kubernetes.io/projected/058eeb97-8442-450e-868e-6d751e021b15-kube-api-access-mfvjf\") pod \"cilium-operator-599987898-v4w54\" (UID: \"058eeb97-8442-450e-868e-6d751e021b15\") " pod="kube-system/cilium-operator-599987898-v4w54" Sep 4 17:30:08.098669 kubelet[2525]: E0904 17:30:08.098642 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:08.102998 containerd[1443]: time="2024-09-04T17:30:08.102959144Z" level=info msg="CreateContainer within sandbox \"36d1ce8a4dc6b64b4ee7b765db846dd088e087ac2337b40d8e3e0d9b7aba0184\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:30:08.116792 containerd[1443]: time="2024-09-04T17:30:08.116672288Z" level=info msg="CreateContainer within sandbox \"36d1ce8a4dc6b64b4ee7b765db846dd088e087ac2337b40d8e3e0d9b7aba0184\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20c0eccefbeafa87b54588ea47517640810bbeb2ff24cb2ff0cf235f24b3e1f5\"" Sep 4 17:30:08.117248 containerd[1443]: time="2024-09-04T17:30:08.117220889Z" level=info msg="StartContainer for 
\"20c0eccefbeafa87b54588ea47517640810bbeb2ff24cb2ff0cf235f24b3e1f5\"" Sep 4 17:30:08.144571 systemd[1]: Started cri-containerd-20c0eccefbeafa87b54588ea47517640810bbeb2ff24cb2ff0cf235f24b3e1f5.scope - libcontainer container 20c0eccefbeafa87b54588ea47517640810bbeb2ff24cb2ff0cf235f24b3e1f5. Sep 4 17:30:08.172175 containerd[1443]: time="2024-09-04T17:30:08.172123505Z" level=info msg="StartContainer for \"20c0eccefbeafa87b54588ea47517640810bbeb2ff24cb2ff0cf235f24b3e1f5\" returns successfully" Sep 4 17:30:08.246750 kubelet[2525]: E0904 17:30:08.245885 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:08.341029 kubelet[2525]: E0904 17:30:08.340978 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:08.342222 containerd[1443]: time="2024-09-04T17:30:08.342182044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-v4w54,Uid:058eeb97-8442-450e-868e-6d751e021b15,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:08.363682 containerd[1443]: time="2024-09-04T17:30:08.363467081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:08.363682 containerd[1443]: time="2024-09-04T17:30:08.363624282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:08.364125 containerd[1443]: time="2024-09-04T17:30:08.364082402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:08.364223 containerd[1443]: time="2024-09-04T17:30:08.364108723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:08.379580 systemd[1]: Started cri-containerd-d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3.scope - libcontainer container d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3. Sep 4 17:30:08.407682 containerd[1443]: time="2024-09-04T17:30:08.407539959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-v4w54,Uid:058eeb97-8442-450e-868e-6d751e021b15,Namespace:kube-system,Attempt:0,} returns sandbox id \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\"" Sep 4 17:30:08.408327 kubelet[2525]: E0904 17:30:08.408304 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:14.339317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094604380.mount: Deactivated successfully. 
Sep 4 17:30:15.677928 containerd[1443]: time="2024-09-04T17:30:15.677873733Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:15.678557 containerd[1443]: time="2024-09-04T17:30:15.678517214Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651550" Sep 4 17:30:15.679228 containerd[1443]: time="2024-09-04T17:30:15.679204015Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:15.680826 containerd[1443]: time="2024-09-04T17:30:15.680789497Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.603094357s" Sep 4 17:30:15.680962 containerd[1443]: time="2024-09-04T17:30:15.680827257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:30:15.684220 containerd[1443]: time="2024-09-04T17:30:15.683936780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:30:15.702921 containerd[1443]: time="2024-09-04T17:30:15.702880881Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:30:15.731507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953493404.mount: Deactivated successfully. Sep 4 17:30:15.733108 containerd[1443]: time="2024-09-04T17:30:15.733067875Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\"" Sep 4 17:30:15.737436 containerd[1443]: time="2024-09-04T17:30:15.737384960Z" level=info msg="StartContainer for \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\"" Sep 4 17:30:15.765582 systemd[1]: Started cri-containerd-5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d.scope - libcontainer container 5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d. Sep 4 17:30:15.788301 containerd[1443]: time="2024-09-04T17:30:15.787148536Z" level=info msg="StartContainer for \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\" returns successfully" Sep 4 17:30:15.824241 systemd[1]: cri-containerd-5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d.scope: Deactivated successfully. 
Sep 4 17:30:15.951231 containerd[1443]: time="2024-09-04T17:30:15.945714633Z" level=info msg="shim disconnected" id=5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d namespace=k8s.io Sep 4 17:30:15.951231 containerd[1443]: time="2024-09-04T17:30:15.950842359Z" level=warning msg="cleaning up after shim disconnected" id=5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d namespace=k8s.io Sep 4 17:30:15.951231 containerd[1443]: time="2024-09-04T17:30:15.950858759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:16.262286 kubelet[2525]: E0904 17:30:16.262189 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:16.266176 containerd[1443]: time="2024-09-04T17:30:16.266135252Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:30:16.282480 kubelet[2525]: I0904 17:30:16.282420 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q945n" podStartSLOduration=9.282405109 podStartE2EDuration="9.282405109s" podCreationTimestamp="2024-09-04 17:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:08.25445505 +0000 UTC m=+17.146869554" watchObservedRunningTime="2024-09-04 17:30:16.282405109 +0000 UTC m=+25.174819653" Sep 4 17:30:16.300077 containerd[1443]: time="2024-09-04T17:30:16.300022928Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\"" Sep 4 17:30:16.300802 containerd[1443]: time="2024-09-04T17:30:16.300747809Z" level=info msg="StartContainer for \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\"" Sep 4 17:30:16.325497 systemd[1]: Started cri-containerd-b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414.scope - libcontainer container b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414. Sep 4 17:30:16.349795 containerd[1443]: time="2024-09-04T17:30:16.349752780Z" level=info msg="StartContainer for \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\" returns successfully" Sep 4 17:30:16.362903 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:30:16.363472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:30:16.363700 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:30:16.371598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:30:16.371769 systemd[1]: cri-containerd-b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414.scope: Deactivated successfully. 
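Each short-lived container in this log (the Cilium init containers in particular) leaves the same trio of containerd messages once its task exits: "shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim". A small sketch for pulling the affected container IDs out of a saved copy of this journal; journal.log is an assumed local file name, and the regular expression simply matches the id= field containerd prints:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// journal.log is an assumed local copy of this console log.
	f, err := os.Open("journal.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches: msg="shim disconnected" id=<64-hex-char container id>
	re := regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)
	seen := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			seen[m[1]]++
		}
	}
	for id, n := range seen {
		fmt.Printf("%s… disconnected %d time(s)\n", id[:12], n)
	}
}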
Sep 4 17:30:16.389033 containerd[1443]: time="2024-09-04T17:30:16.388969461Z" level=info msg="shim disconnected" id=b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414 namespace=k8s.io Sep 4 17:30:16.389033 containerd[1443]: time="2024-09-04T17:30:16.389023221Z" level=warning msg="cleaning up after shim disconnected" id=b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414 namespace=k8s.io Sep 4 17:30:16.389033 containerd[1443]: time="2024-09-04T17:30:16.389034541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:16.408491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:30:16.729886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d-rootfs.mount: Deactivated successfully. Sep 4 17:30:17.000792 containerd[1443]: time="2024-09-04T17:30:17.000625262Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:17.001262 containerd[1443]: time="2024-09-04T17:30:17.001222223Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138346" Sep 4 17:30:17.001933 containerd[1443]: time="2024-09-04T17:30:17.001894543Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:17.003433 containerd[1443]: time="2024-09-04T17:30:17.003385185Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.319295205s" Sep 4 17:30:17.003433 containerd[1443]: time="2024-09-04T17:30:17.003423905Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:30:17.005799 containerd[1443]: time="2024-09-04T17:30:17.005659427Z" level=info msg="CreateContainer within sandbox \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:30:17.017227 containerd[1443]: time="2024-09-04T17:30:17.017176518Z" level=info msg="CreateContainer within sandbox \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\"" Sep 4 17:30:17.017673 containerd[1443]: time="2024-09-04T17:30:17.017640199Z" level=info msg="StartContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\"" Sep 4 17:30:17.049499 systemd[1]: Started cri-containerd-5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9.scope - libcontainer container 5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9. 
Sep 4 17:30:17.076805 containerd[1443]: time="2024-09-04T17:30:17.076754097Z" level=info msg="StartContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" returns successfully" Sep 4 17:30:17.265494 kubelet[2525]: E0904 17:30:17.264976 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:17.268409 containerd[1443]: time="2024-09-04T17:30:17.268238485Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:30:17.269319 kubelet[2525]: E0904 17:30:17.269178 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:17.298539 kubelet[2525]: I0904 17:30:17.298477 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-v4w54" podStartSLOduration=0.703585172 podStartE2EDuration="9.298460955s" podCreationTimestamp="2024-09-04 17:30:08 +0000 UTC" firstStartedPulling="2024-09-04 17:30:08.409194442 +0000 UTC m=+17.301608986" lastFinishedPulling="2024-09-04 17:30:17.004070225 +0000 UTC m=+25.896484769" observedRunningTime="2024-09-04 17:30:17.297301233 +0000 UTC m=+26.189715777" watchObservedRunningTime="2024-09-04 17:30:17.298460955 +0000 UTC m=+26.190875499" Sep 4 17:30:17.330473 containerd[1443]: time="2024-09-04T17:30:17.330393466Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\"" Sep 4 17:30:17.331042 containerd[1443]: time="2024-09-04T17:30:17.330933266Z" level=info msg="StartContainer for \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\"" Sep 4 17:30:17.355492 systemd[1]: Started cri-containerd-67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd.scope - libcontainer container 67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd. Sep 4 17:30:17.381446 containerd[1443]: time="2024-09-04T17:30:17.381407876Z" level=info msg="StartContainer for \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\" returns successfully" Sep 4 17:30:17.394527 systemd[1]: cri-containerd-67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd.scope: Deactivated successfully. 
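The pod_startup_latency_tracker entry above for cilium-operator-599987898-v4w54 reports podStartE2EDuration="9.298460955s" but podStartSLOduration=0.703585172; the difference is exactly the image pull window (lastFinishedPulling minus firstStartedPulling), consistent with the SLO figure excluding image pull time. A short check of that arithmetic using the timestamps printed in the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
	firstStartedPulling := mustParse("2024-09-04 17:30:08.409194442 +0000 UTC")
	lastFinishedPulling := mustParse("2024-09-04 17:30:17.004070225 +0000 UTC")
	e2e, _ := time.ParseDuration("9.298460955s")

	pull := lastFinishedPulling.Sub(firstStartedPulling) // 8.594875783s spent pulling the operator image
	fmt.Println("image pull:", pull)
	fmt.Println("E2E minus pull:", e2e-pull) // 703.585172ms == podStartSLOduration
}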
Sep 4 17:30:17.440768 containerd[1443]: time="2024-09-04T17:30:17.440705894Z" level=info msg="shim disconnected" id=67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd namespace=k8s.io Sep 4 17:30:17.440768 containerd[1443]: time="2024-09-04T17:30:17.440757854Z" level=warning msg="cleaning up after shim disconnected" id=67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd namespace=k8s.io Sep 4 17:30:17.440768 containerd[1443]: time="2024-09-04T17:30:17.440767254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:18.272250 kubelet[2525]: E0904 17:30:18.271821 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.272250 kubelet[2525]: E0904 17:30:18.271861 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.274465 containerd[1443]: time="2024-09-04T17:30:18.274424177Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:30:18.290205 containerd[1443]: time="2024-09-04T17:30:18.290106351Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\"" Sep 4 17:30:18.291090 containerd[1443]: time="2024-09-04T17:30:18.291064352Z" level=info msg="StartContainer for \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\"" Sep 4 17:30:18.317515 systemd[1]: Started cri-containerd-fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778.scope - libcontainer container fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778. Sep 4 17:30:18.338136 systemd[1]: cri-containerd-fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778.scope: Deactivated successfully. Sep 4 17:30:18.339198 containerd[1443]: time="2024-09-04T17:30:18.339153636Z" level=info msg="StartContainer for \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\" returns successfully" Sep 4 17:30:18.359430 containerd[1443]: time="2024-09-04T17:30:18.359362135Z" level=info msg="shim disconnected" id=fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778 namespace=k8s.io Sep 4 17:30:18.359430 containerd[1443]: time="2024-09-04T17:30:18.359417215Z" level=warning msg="cleaning up after shim disconnected" id=fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778 namespace=k8s.io Sep 4 17:30:18.359430 containerd[1443]: time="2024-09-04T17:30:18.359427775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:18.729853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778-rootfs.mount: Deactivated successfully. 
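Read together, the CreateContainer/StartContainer entries inside sandbox 0e17b179d4efab63… trace the usual Cilium agent pod start-up on this node: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state run one after another (in the Cilium DaemonSet these are normally declared as init containers), and the long-running cilium-agent container follows just below. A purely illustrative sketch of that ordering expressed with the Kubernetes API types; the names are taken from the log, images and commands are omitted, and this is not the real Cilium manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Start order reconstructed from the journal entries above.
	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{
			{Name: "mount-cgroup"},
			{Name: "apply-sysctl-overwrites"},
			{Name: "mount-bpf-fs"},
			{Name: "clean-cilium-state"},
		},
		Containers: []corev1.Container{
			{Name: "cilium-agent"},
		},
	}
	for _, c := range spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
	for _, c := range spec.Containers {
		fmt.Println("main:", c.Name)
	}
}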
Sep 4 17:30:19.277245 kubelet[2525]: E0904 17:30:19.277209 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:19.280347 containerd[1443]: time="2024-09-04T17:30:19.280294167Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:30:19.333164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774419052.mount: Deactivated successfully. Sep 4 17:30:19.334363 containerd[1443]: time="2024-09-04T17:30:19.334299613Z" level=info msg="CreateContainer within sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\"" Sep 4 17:30:19.334885 containerd[1443]: time="2024-09-04T17:30:19.334853574Z" level=info msg="StartContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\"" Sep 4 17:30:19.363520 systemd[1]: Started cri-containerd-96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb.scope - libcontainer container 96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb. Sep 4 17:30:19.399892 containerd[1443]: time="2024-09-04T17:30:19.399849790Z" level=info msg="StartContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" returns successfully" Sep 4 17:30:19.502369 kubelet[2525]: I0904 17:30:19.502293 2525 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:30:19.529489 kubelet[2525]: I0904 17:30:19.528439 2525 topology_manager.go:215] "Topology Admit Handler" podUID="3950aa12-ae63-4855-946f-d0aae5ab6532" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g87zf" Sep 4 17:30:19.531403 kubelet[2525]: I0904 17:30:19.530129 2525 topology_manager.go:215] "Topology Admit Handler" podUID="16475769-1be8-4af2-9afa-cfd8096ff503" podNamespace="kube-system" podName="coredns-7db6d8ff4d-98khj" Sep 4 17:30:19.556671 systemd[1]: Created slice kubepods-burstable-pod16475769_1be8_4af2_9afa_cfd8096ff503.slice - libcontainer container kubepods-burstable-pod16475769_1be8_4af2_9afa_cfd8096ff503.slice. Sep 4 17:30:19.576242 systemd[1]: Created slice kubepods-burstable-pod3950aa12_ae63_4855_946f_d0aae5ab6532.slice - libcontainer container kubepods-burstable-pod3950aa12_ae63_4855_946f_d0aae5ab6532.slice. 
Sep 4 17:30:19.582904 kubelet[2525]: I0904 17:30:19.582865 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj5gw\" (UniqueName: \"kubernetes.io/projected/3950aa12-ae63-4855-946f-d0aae5ab6532-kube-api-access-dj5gw\") pod \"coredns-7db6d8ff4d-g87zf\" (UID: \"3950aa12-ae63-4855-946f-d0aae5ab6532\") " pod="kube-system/coredns-7db6d8ff4d-g87zf" Sep 4 17:30:19.583034 kubelet[2525]: I0904 17:30:19.582942 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3950aa12-ae63-4855-946f-d0aae5ab6532-config-volume\") pod \"coredns-7db6d8ff4d-g87zf\" (UID: \"3950aa12-ae63-4855-946f-d0aae5ab6532\") " pod="kube-system/coredns-7db6d8ff4d-g87zf" Sep 4 17:30:19.583034 kubelet[2525]: I0904 17:30:19.582965 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvzz\" (UniqueName: \"kubernetes.io/projected/16475769-1be8-4af2-9afa-cfd8096ff503-kube-api-access-llvzz\") pod \"coredns-7db6d8ff4d-98khj\" (UID: \"16475769-1be8-4af2-9afa-cfd8096ff503\") " pod="kube-system/coredns-7db6d8ff4d-98khj" Sep 4 17:30:19.583034 kubelet[2525]: I0904 17:30:19.582983 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16475769-1be8-4af2-9afa-cfd8096ff503-config-volume\") pod \"coredns-7db6d8ff4d-98khj\" (UID: \"16475769-1be8-4af2-9afa-cfd8096ff503\") " pod="kube-system/coredns-7db6d8ff4d-98khj" Sep 4 17:30:19.866724 kubelet[2525]: E0904 17:30:19.865001 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:19.867678 containerd[1443]: time="2024-09-04T17:30:19.867278994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-98khj,Uid:16475769-1be8-4af2-9afa-cfd8096ff503,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:19.879473 kubelet[2525]: E0904 17:30:19.879434 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:19.882233 containerd[1443]: time="2024-09-04T17:30:19.882100726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g87zf,Uid:3950aa12-ae63-4855-946f-d0aae5ab6532,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:20.281431 kubelet[2525]: E0904 17:30:20.281391 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:20.296234 kubelet[2525]: I0904 17:30:20.295908 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4rzst" podStartSLOduration=5.689347387 podStartE2EDuration="13.295892188s" podCreationTimestamp="2024-09-04 17:30:07 +0000 UTC" firstStartedPulling="2024-09-04 17:30:08.077211939 +0000 UTC m=+16.969626443" lastFinishedPulling="2024-09-04 17:30:15.6837567 +0000 UTC m=+24.576171244" observedRunningTime="2024-09-04 17:30:20.294864387 +0000 UTC m=+29.187278931" watchObservedRunningTime="2024-09-04 17:30:20.295892188 +0000 UTC m=+29.188306692" Sep 4 17:30:21.283390 kubelet[2525]: E0904 17:30:21.283024 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:21.681395 systemd-networkd[1375]: cilium_host: Link UP Sep 4 17:30:21.682220 systemd-networkd[1375]: cilium_net: Link UP Sep 4 17:30:21.682321 systemd-networkd[1375]: cilium_net: Gained carrier Sep 4 17:30:21.682740 systemd-networkd[1375]: cilium_host: Gained carrier Sep 4 17:30:21.683092 systemd-networkd[1375]: cilium_net: Gained IPv6LL Sep 4 17:30:21.771554 systemd-networkd[1375]: cilium_vxlan: Link UP Sep 4 17:30:21.771562 systemd-networkd[1375]: cilium_vxlan: Gained carrier Sep 4 17:30:22.026475 systemd-networkd[1375]: cilium_host: Gained IPv6LL Sep 4 17:30:22.092384 kernel: NET: Registered PF_ALG protocol family Sep 4 17:30:22.284128 kubelet[2525]: E0904 17:30:22.284031 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:22.696029 systemd-networkd[1375]: lxc_health: Link UP Sep 4 17:30:22.701822 systemd-networkd[1375]: lxc_health: Gained carrier Sep 4 17:30:23.050736 systemd-networkd[1375]: lxcc261f7493601: Link UP Sep 4 17:30:23.060135 systemd-networkd[1375]: lxc98dad023cf2b: Link UP Sep 4 17:30:23.072363 kernel: eth0: renamed from tmpf3230 Sep 4 17:30:23.080410 kernel: eth0: renamed from tmp0c24f Sep 4 17:30:23.092267 systemd-networkd[1375]: lxcc261f7493601: Gained carrier Sep 4 17:30:23.092501 systemd-networkd[1375]: lxc98dad023cf2b: Gained carrier Sep 4 17:30:23.322521 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Sep 4 17:30:23.972512 kubelet[2525]: E0904 17:30:23.972469 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:24.045063 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:34710.service - OpenSSH per-connection server daemon (10.0.0.1:34710). Sep 4 17:30:24.084362 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 34710 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:24.088844 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:24.092853 systemd-logind[1420]: New session 8 of user core. Sep 4 17:30:24.101540 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:30:24.220518 systemd-networkd[1375]: lxc_health: Gained IPv6LL Sep 4 17:30:24.235006 sshd[3757]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:24.238795 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:34710.service: Deactivated successfully. Sep 4 17:30:24.242305 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:30:24.243133 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:30:24.243945 systemd-logind[1420]: Removed session 8. 
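The systemd-networkd lines above record Cilium bringing up its datapath interfaces on the node: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, and the per-endpoint lxc* veths (lxc_health plus lxcc261f7493601 and lxc98dad023cf2b, which apparently back the two coredns pods that get sandboxes further down), while the kernel logs eth0 being renamed inside the new namespaces. A small sketch that lists those links from the host, assuming the github.com/vishvananda/netlink package rather than anything Cilium ships:

package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList() // enumerate host network interfaces
	if err != nil {
		panic(err)
	}
	for _, l := range links {
		name := l.Attrs().Name
		// Cilium's devices as seen in the journal: cilium_host, cilium_net,
		// cilium_vxlan and the per-endpoint lxc* veths.
		if strings.HasPrefix(name, "cilium_") || strings.HasPrefix(name, "lxc") {
			fmt.Printf("%-16s type=%s state=%s\n", name, l.Type(), l.Attrs().OperState)
		}
	}
}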
Sep 4 17:30:24.288147 kubelet[2525]: E0904 17:30:24.288120 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:24.346477 systemd-networkd[1375]: lxcc261f7493601: Gained IPv6LL Sep 4 17:30:24.602530 systemd-networkd[1375]: lxc98dad023cf2b: Gained IPv6LL Sep 4 17:30:25.290042 kubelet[2525]: E0904 17:30:25.289857 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:26.746759 containerd[1443]: time="2024-09-04T17:30:26.746634477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:26.746759 containerd[1443]: time="2024-09-04T17:30:26.746749117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:26.747186 containerd[1443]: time="2024-09-04T17:30:26.746786077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:26.747186 containerd[1443]: time="2024-09-04T17:30:26.746873597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:26.756920 containerd[1443]: time="2024-09-04T17:30:26.756453882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:26.756920 containerd[1443]: time="2024-09-04T17:30:26.756510042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:26.756920 containerd[1443]: time="2024-09-04T17:30:26.756524722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:26.756920 containerd[1443]: time="2024-09-04T17:30:26.756536602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:26.782565 systemd[1]: Started cri-containerd-0c24f86d833514200f61ac7c3f8e94e23e1ce6e8139b1dc9f15b7e9f64177eab.scope - libcontainer container 0c24f86d833514200f61ac7c3f8e94e23e1ce6e8139b1dc9f15b7e9f64177eab. Sep 4 17:30:26.783890 systemd[1]: Started cri-containerd-f3230b983ebbcebfa0c82158d5fc5a63a9e4a72d5918067a3e563c14ec80c708.scope - libcontainer container f3230b983ebbcebfa0c82158d5fc5a63a9e4a72d5918067a3e563c14ec80c708. 
Sep 4 17:30:26.797015 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:30:26.798251 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:30:26.818986 containerd[1443]: time="2024-09-04T17:30:26.818819156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g87zf,Uid:3950aa12-ae63-4855-946f-d0aae5ab6532,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c24f86d833514200f61ac7c3f8e94e23e1ce6e8139b1dc9f15b7e9f64177eab\"" Sep 4 17:30:26.820098 kubelet[2525]: E0904 17:30:26.819735 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:26.821167 containerd[1443]: time="2024-09-04T17:30:26.821130957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-98khj,Uid:16475769-1be8-4af2-9afa-cfd8096ff503,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3230b983ebbcebfa0c82158d5fc5a63a9e4a72d5918067a3e563c14ec80c708\"" Sep 4 17:30:26.821861 containerd[1443]: time="2024-09-04T17:30:26.821832358Z" level=info msg="CreateContainer within sandbox \"0c24f86d833514200f61ac7c3f8e94e23e1ce6e8139b1dc9f15b7e9f64177eab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:30:26.822175 kubelet[2525]: E0904 17:30:26.822095 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:26.825377 containerd[1443]: time="2024-09-04T17:30:26.825317360Z" level=info msg="CreateContainer within sandbox \"f3230b983ebbcebfa0c82158d5fc5a63a9e4a72d5918067a3e563c14ec80c708\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:30:26.839118 containerd[1443]: time="2024-09-04T17:30:26.839058967Z" level=info msg="CreateContainer within sandbox \"0c24f86d833514200f61ac7c3f8e94e23e1ce6e8139b1dc9f15b7e9f64177eab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"061a633053b7fafc616a88aa0d2e9e7aec41734de9c64fc5050ea99402b9ef29\"" Sep 4 17:30:26.839865 containerd[1443]: time="2024-09-04T17:30:26.839587608Z" level=info msg="StartContainer for \"061a633053b7fafc616a88aa0d2e9e7aec41734de9c64fc5050ea99402b9ef29\"" Sep 4 17:30:26.845073 containerd[1443]: time="2024-09-04T17:30:26.845030731Z" level=info msg="CreateContainer within sandbox \"f3230b983ebbcebfa0c82158d5fc5a63a9e4a72d5918067a3e563c14ec80c708\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d4c26094f099ada6cdf2529bbd1bb00d3f3deb742669c50d148a3694277d482\"" Sep 4 17:30:26.846875 containerd[1443]: time="2024-09-04T17:30:26.845907731Z" level=info msg="StartContainer for \"9d4c26094f099ada6cdf2529bbd1bb00d3f3deb742669c50d148a3694277d482\"" Sep 4 17:30:26.867401 systemd[1]: Started cri-containerd-061a633053b7fafc616a88aa0d2e9e7aec41734de9c64fc5050ea99402b9ef29.scope - libcontainer container 061a633053b7fafc616a88aa0d2e9e7aec41734de9c64fc5050ea99402b9ef29. Sep 4 17:30:26.870182 systemd[1]: Started cri-containerd-9d4c26094f099ada6cdf2529bbd1bb00d3f3deb742669c50d148a3694277d482.scope - libcontainer container 9d4c26094f099ada6cdf2529bbd1bb00d3f3deb742669c50d148a3694277d482. 
Sep 4 17:30:26.909578 containerd[1443]: time="2024-09-04T17:30:26.903695963Z" level=info msg="StartContainer for \"9d4c26094f099ada6cdf2529bbd1bb00d3f3deb742669c50d148a3694277d482\" returns successfully" Sep 4 17:30:26.909578 containerd[1443]: time="2024-09-04T17:30:26.906466244Z" level=info msg="StartContainer for \"061a633053b7fafc616a88aa0d2e9e7aec41734de9c64fc5050ea99402b9ef29\" returns successfully" Sep 4 17:30:27.296221 kubelet[2525]: E0904 17:30:27.296121 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:27.298387 kubelet[2525]: E0904 17:30:27.297592 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:27.305401 kubelet[2525]: I0904 17:30:27.305318 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g87zf" podStartSLOduration=19.305299653 podStartE2EDuration="19.305299653s" podCreationTimestamp="2024-09-04 17:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:27.304312773 +0000 UTC m=+36.196727317" watchObservedRunningTime="2024-09-04 17:30:27.305299653 +0000 UTC m=+36.197714197" Sep 4 17:30:28.298587 kubelet[2525]: E0904 17:30:28.298488 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:28.298587 kubelet[2525]: E0904 17:30:28.298525 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:29.246370 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:34712.service - OpenSSH per-connection server daemon (10.0.0.1:34712). Sep 4 17:30:29.288429 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:29.289158 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:29.292953 systemd-logind[1420]: New session 9 of user core. Sep 4 17:30:29.300378 kubelet[2525]: E0904 17:30:29.300305 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:29.303541 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:30:29.424450 sshd[3950]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:29.428096 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:34712.service: Deactivated successfully. Sep 4 17:30:29.430095 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:30:29.430808 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:30:29.431772 systemd-logind[1420]: Removed session 9. Sep 4 17:30:34.435018 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:58890.service - OpenSSH per-connection server daemon (10.0.0.1:58890). 
Sep 4 17:30:34.470025 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 58890 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:34.471436 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:34.475220 systemd-logind[1420]: New session 10 of user core. Sep 4 17:30:34.482556 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:30:34.596275 sshd[3967]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:34.600118 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:58890.service: Deactivated successfully. Sep 4 17:30:34.602281 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:30:34.603457 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:30:34.604960 systemd-logind[1420]: Removed session 10. Sep 4 17:30:39.634167 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:58902.service - OpenSSH per-connection server daemon (10.0.0.1:58902). Sep 4 17:30:39.670498 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 58902 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:39.671885 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:39.679013 systemd-logind[1420]: New session 11 of user core. Sep 4 17:30:39.689030 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:30:39.798495 sshd[3984]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:39.808879 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:58902.service: Deactivated successfully. Sep 4 17:30:39.810654 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:30:39.812051 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:30:39.825644 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:58904.service - OpenSSH per-connection server daemon (10.0.0.1:58904). Sep 4 17:30:39.828371 systemd-logind[1420]: Removed session 11. Sep 4 17:30:39.858846 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 58904 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:39.860500 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:39.866202 systemd-logind[1420]: New session 12 of user core. Sep 4 17:30:39.873518 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:30:40.030419 sshd[4000]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:40.042489 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:58904.service: Deactivated successfully. Sep 4 17:30:40.044847 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:30:40.049354 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:30:40.055902 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:58906.service - OpenSSH per-connection server daemon (10.0.0.1:58906). Sep 4 17:30:40.057901 systemd-logind[1420]: Removed session 12. Sep 4 17:30:40.088998 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 58906 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:40.090300 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:40.094547 systemd-logind[1420]: New session 13 of user core. Sep 4 17:30:40.099485 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 17:30:40.219376 sshd[4012]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:40.222749 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:58906.service: Deactivated successfully. Sep 4 17:30:40.224331 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:30:40.224928 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:30:40.226229 systemd-logind[1420]: Removed session 13. Sep 4 17:30:45.240008 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:44542.service - OpenSSH per-connection server daemon (10.0.0.1:44542). Sep 4 17:30:45.289299 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 44542 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:45.290754 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:45.295941 systemd-logind[1420]: New session 14 of user core. Sep 4 17:30:45.305528 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:30:45.427739 sshd[4027]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:45.431124 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:44542.service: Deactivated successfully. Sep 4 17:30:45.432929 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:30:45.433693 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:30:45.434694 systemd-logind[1420]: Removed session 14. Sep 4 17:30:50.438690 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:44558.service - OpenSSH per-connection server daemon (10.0.0.1:44558). Sep 4 17:30:50.478710 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 44558 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:50.480048 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:50.485049 systemd-logind[1420]: New session 15 of user core. Sep 4 17:30:50.493494 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:30:50.602081 sshd[4041]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:50.609771 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:44558.service: Deactivated successfully. Sep 4 17:30:50.611604 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:30:50.613206 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:30:50.624615 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:44570.service - OpenSSH per-connection server daemon (10.0.0.1:44570). Sep 4 17:30:50.625620 systemd-logind[1420]: Removed session 15. Sep 4 17:30:50.652938 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 44570 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:50.654251 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:50.657858 systemd-logind[1420]: New session 16 of user core. Sep 4 17:30:50.668482 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:30:50.867522 sshd[4056]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:50.877844 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:44570.service: Deactivated successfully. Sep 4 17:30:50.879428 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:30:50.880662 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:30:50.889601 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:44580.service - OpenSSH per-connection server daemon (10.0.0.1:44580). Sep 4 17:30:50.891065 systemd-logind[1420]: Removed session 16. 
Sep 4 17:30:50.921823 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 44580 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:50.923021 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:50.926612 systemd-logind[1420]: New session 17 of user core. Sep 4 17:30:50.937486 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:30:52.247515 sshd[4068]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:52.256030 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:44580.service: Deactivated successfully. Sep 4 17:30:52.260292 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:30:52.262292 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:30:52.272220 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:44596.service - OpenSSH per-connection server daemon (10.0.0.1:44596). Sep 4 17:30:52.276360 systemd-logind[1420]: Removed session 17. Sep 4 17:30:52.308265 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 44596 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:52.309714 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:52.313657 systemd-logind[1420]: New session 18 of user core. Sep 4 17:30:52.326507 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:30:52.539638 sshd[4090]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:52.550122 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:44596.service: Deactivated successfully. Sep 4 17:30:52.553183 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:30:52.556137 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:30:52.563621 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:56656.service - OpenSSH per-connection server daemon (10.0.0.1:56656). Sep 4 17:30:52.564478 systemd-logind[1420]: Removed session 18. Sep 4 17:30:52.595348 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 56656 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:52.596689 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:52.600438 systemd-logind[1420]: New session 19 of user core. Sep 4 17:30:52.604505 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:30:52.720930 sshd[4104]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:52.724707 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:56656.service: Deactivated successfully. Sep 4 17:30:52.726560 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:30:52.727230 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:30:52.728251 systemd-logind[1420]: Removed session 19. Sep 4 17:30:57.731910 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666). Sep 4 17:30:57.766773 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:30:57.768190 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:57.771965 systemd-logind[1420]: New session 20 of user core. Sep 4 17:30:57.782524 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 17:30:57.893583 sshd[4121]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:57.897533 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:56666.service: Deactivated successfully. Sep 4 17:30:57.899232 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:30:57.900112 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:30:57.901377 systemd-logind[1420]: Removed session 20. Sep 4 17:31:02.920669 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042). Sep 4 17:31:02.953849 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:02.955401 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:02.959640 systemd-logind[1420]: New session 21 of user core. Sep 4 17:31:02.969621 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:31:03.092871 sshd[4135]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:03.096787 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:56042.service: Deactivated successfully. Sep 4 17:31:03.098459 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:31:03.100405 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:31:03.101467 systemd-logind[1420]: Removed session 21. Sep 4 17:31:08.101852 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:56058.service - OpenSSH per-connection server daemon (10.0.0.1:56058). Sep 4 17:31:08.136883 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:08.138213 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:08.142205 systemd-logind[1420]: New session 22 of user core. Sep 4 17:31:08.153547 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:31:08.205505 kubelet[2525]: E0904 17:31:08.205462 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:08.270354 sshd[4150]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:08.280826 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:56058.service: Deactivated successfully. Sep 4 17:31:08.282323 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:31:08.284624 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:31:08.292655 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:56066.service - OpenSSH per-connection server daemon (10.0.0.1:56066). Sep 4 17:31:08.293884 systemd-logind[1420]: Removed session 22. Sep 4 17:31:08.322473 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 56066 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:08.323850 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:08.328374 systemd-logind[1420]: New session 23 of user core. Sep 4 17:31:08.336541 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:31:09.884963 kubelet[2525]: I0904 17:31:09.884399 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-98khj" podStartSLOduration=61.884383052 podStartE2EDuration="1m1.884383052s" podCreationTimestamp="2024-09-04 17:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:27.328398865 +0000 UTC m=+36.220813409" watchObservedRunningTime="2024-09-04 17:31:09.884383052 +0000 UTC m=+78.776797596" Sep 4 17:31:09.900812 containerd[1443]: time="2024-09-04T17:31:09.900763018Z" level=info msg="StopContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" with timeout 30 (s)" Sep 4 17:31:09.915566 containerd[1443]: time="2024-09-04T17:31:09.915451615Z" level=info msg="Stop container \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" with signal terminated" Sep 4 17:31:09.923176 containerd[1443]: time="2024-09-04T17:31:09.923120216Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:31:09.931983 containerd[1443]: time="2024-09-04T17:31:09.931623820Z" level=info msg="StopContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" with timeout 2 (s)" Sep 4 17:31:09.932404 containerd[1443]: time="2024-09-04T17:31:09.932234424Z" level=info msg="Stop container \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" with signal terminated" Sep 4 17:31:09.934826 systemd[1]: cri-containerd-5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9.scope: Deactivated successfully. Sep 4 17:31:09.938616 systemd-networkd[1375]: lxc_health: Link DOWN Sep 4 17:31:09.938623 systemd-networkd[1375]: lxc_health: Lost carrier Sep 4 17:31:09.956691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9-rootfs.mount: Deactivated successfully. Sep 4 17:31:09.958890 systemd[1]: cri-containerd-96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb.scope: Deactivated successfully. Sep 4 17:31:09.959646 systemd[1]: cri-containerd-96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb.scope: Consumed 6.759s CPU time. Sep 4 17:31:09.976366 containerd[1443]: time="2024-09-04T17:31:09.976186615Z" level=info msg="shim disconnected" id=5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9 namespace=k8s.io Sep 4 17:31:09.976366 containerd[1443]: time="2024-09-04T17:31:09.976248936Z" level=warning msg="cleaning up after shim disconnected" id=5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9 namespace=k8s.io Sep 4 17:31:09.976366 containerd[1443]: time="2024-09-04T17:31:09.976257016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:09.977225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb-rootfs.mount: Deactivated successfully. 
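The containerd error above ("failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")") is a side effect of the Cilium teardown that follows: containerd watches /etc/cni/net.d, and once the agent's conf file is removed the reload finds no network config left. A minimal sketch of that kind of directory watch, assuming the github.com/fsnotify/fsnotify package rather than containerd's own implementation:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI configuration directory, as containerd does.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("cni conf removed: %s; a reload now would find no network config", ev.Name)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}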
Sep 4 17:31:09.981327 containerd[1443]: time="2024-09-04T17:31:09.981077561Z" level=info msg="shim disconnected" id=96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb namespace=k8s.io Sep 4 17:31:09.981327 containerd[1443]: time="2024-09-04T17:31:09.981153681Z" level=warning msg="cleaning up after shim disconnected" id=96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb namespace=k8s.io Sep 4 17:31:09.981327 containerd[1443]: time="2024-09-04T17:31:09.981163001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:10.016602 containerd[1443]: time="2024-09-04T17:31:10.016045343Z" level=info msg="StopContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" returns successfully" Sep 4 17:31:10.016885 containerd[1443]: time="2024-09-04T17:31:10.016856867Z" level=info msg="StopContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" returns successfully" Sep 4 17:31:10.018061 containerd[1443]: time="2024-09-04T17:31:10.017880193Z" level=info msg="StopPodSandbox for \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\"" Sep 4 17:31:10.019730 containerd[1443]: time="2024-09-04T17:31:10.018625636Z" level=info msg="StopPodSandbox for \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\"" Sep 4 17:31:10.019818 containerd[1443]: time="2024-09-04T17:31:10.019724722Z" level=info msg="Container to stop \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021100 containerd[1443]: time="2024-09-04T17:31:10.017927913Z" level=info msg="Container to stop \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021208 containerd[1443]: time="2024-09-04T17:31:10.021190489Z" level=info msg="Container to stop \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021282 containerd[1443]: time="2024-09-04T17:31:10.021254650Z" level=info msg="Container to stop \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021373 containerd[1443]: time="2024-09-04T17:31:10.021326130Z" level=info msg="Container to stop \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021444 containerd[1443]: time="2024-09-04T17:31:10.021429011Z" level=info msg="Container to stop \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:10.021460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3-shm.mount: Deactivated successfully. Sep 4 17:31:10.023819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc-shm.mount: Deactivated successfully. Sep 4 17:31:10.026836 systemd[1]: cri-containerd-d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3.scope: Deactivated successfully. Sep 4 17:31:10.028471 systemd[1]: cri-containerd-0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc.scope: Deactivated successfully. 
Sep 4 17:31:10.052874 containerd[1443]: time="2024-09-04T17:31:10.052785772Z" level=info msg="shim disconnected" id=0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc namespace=k8s.io Sep 4 17:31:10.053190 containerd[1443]: time="2024-09-04T17:31:10.053167014Z" level=warning msg="cleaning up after shim disconnected" id=0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc namespace=k8s.io Sep 4 17:31:10.054432 containerd[1443]: time="2024-09-04T17:31:10.054395020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:10.054512 containerd[1443]: time="2024-09-04T17:31:10.053062253Z" level=info msg="shim disconnected" id=d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3 namespace=k8s.io Sep 4 17:31:10.054512 containerd[1443]: time="2024-09-04T17:31:10.054454300Z" level=warning msg="cleaning up after shim disconnected" id=d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3 namespace=k8s.io Sep 4 17:31:10.054512 containerd[1443]: time="2024-09-04T17:31:10.054461940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:10.067357 containerd[1443]: time="2024-09-04T17:31:10.067293366Z" level=info msg="TearDown network for sandbox \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" successfully" Sep 4 17:31:10.067357 containerd[1443]: time="2024-09-04T17:31:10.067327286Z" level=info msg="StopPodSandbox for \"0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc\" returns successfully" Sep 4 17:31:10.083453 containerd[1443]: time="2024-09-04T17:31:10.083389649Z" level=info msg="TearDown network for sandbox \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\" successfully" Sep 4 17:31:10.083453 containerd[1443]: time="2024-09-04T17:31:10.083442489Z" level=info msg="StopPodSandbox for \"d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3\" returns successfully" Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193145 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hostproc\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193189 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-bpf-maps\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193205 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-kernel\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193227 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hubble-tls\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193245 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxbmq\" (UniqueName: 
\"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-kube-api-access-jxbmq\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193278 kubelet[2525]: I0904 17:31:10.193263 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-clustermesh-secrets\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193290 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/058eeb97-8442-450e-868e-6d751e021b15-cilium-config-path\") pod \"058eeb97-8442-450e-868e-6d751e021b15\" (UID: \"058eeb97-8442-450e-868e-6d751e021b15\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193307 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvjf\" (UniqueName: \"kubernetes.io/projected/058eeb97-8442-450e-868e-6d751e021b15-kube-api-access-mfvjf\") pod \"058eeb97-8442-450e-868e-6d751e021b15\" (UID: \"058eeb97-8442-450e-868e-6d751e021b15\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193321 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-cgroup\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193411 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-config-path\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193434 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-xtables-lock\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193690 kubelet[2525]: I0904 17:31:10.193448 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-lib-modules\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193826 kubelet[2525]: I0904 17:31:10.193491 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cni-path\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193826 kubelet[2525]: I0904 17:31:10.193507 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-net\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193826 kubelet[2525]: I0904 17:31:10.193522 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-etc-cni-netd\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.193826 kubelet[2525]: I0904 17:31:10.193555 2525 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-run\") pod \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\" (UID: \"3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b\") " Sep 4 17:31:10.199240 kubelet[2525]: I0904 17:31:10.199155 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.199240 kubelet[2525]: I0904 17:31:10.199230 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.199486 kubelet[2525]: I0904 17:31:10.199429 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hostproc" (OuterVolumeSpecName: "hostproc") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.199535 kubelet[2525]: I0904 17:31:10.199488 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.199535 kubelet[2525]: I0904 17:31:10.199515 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cni-path" (OuterVolumeSpecName: "cni-path") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.205661 kubelet[2525]: I0904 17:31:10.203985 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.205661 kubelet[2525]: I0904 17:31:10.204044 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.205661 kubelet[2525]: I0904 17:31:10.204064 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.205661 kubelet[2525]: I0904 17:31:10.204080 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.205661 kubelet[2525]: I0904 17:31:10.205372 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:31:10.206139 kubelet[2525]: E0904 17:31:10.206107 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:10.206421 kubelet[2525]: I0904 17:31:10.206390 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:31:10.206708 kubelet[2525]: I0904 17:31:10.206610 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:10.206708 kubelet[2525]: I0904 17:31:10.206658 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:10.208041 kubelet[2525]: I0904 17:31:10.208010 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058eeb97-8442-450e-868e-6d751e021b15-kube-api-access-mfvjf" (OuterVolumeSpecName: "kube-api-access-mfvjf") pod "058eeb97-8442-450e-868e-6d751e021b15" (UID: "058eeb97-8442-450e-868e-6d751e021b15"). InnerVolumeSpecName "kube-api-access-mfvjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:10.208303 kubelet[2525]: I0904 17:31:10.208257 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058eeb97-8442-450e-868e-6d751e021b15-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "058eeb97-8442-450e-868e-6d751e021b15" (UID: "058eeb97-8442-450e-868e-6d751e021b15"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:31:10.208666 kubelet[2525]: I0904 17:31:10.208636 2525 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-kube-api-access-jxbmq" (OuterVolumeSpecName: "kube-api-access-jxbmq") pod "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" (UID: "3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b"). InnerVolumeSpecName "kube-api-access-jxbmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293747 2525 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293783 2525 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293796 2525 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jxbmq\" (UniqueName: \"kubernetes.io/projected/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-kube-api-access-jxbmq\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293805 2525 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293814 2525 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/058eeb97-8442-450e-868e-6d751e021b15-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293824 2525 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mfvjf\" (UniqueName: \"kubernetes.io/projected/058eeb97-8442-450e-868e-6d751e021b15-kube-api-access-mfvjf\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293832 2525 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.293820 kubelet[2525]: I0904 17:31:10.293840 2525 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293848 2525 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293856 2525 reconciler_common.go:289] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293864 2525 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293872 2525 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293879 2525 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293888 2525 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293895 2525 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.294114 kubelet[2525]: I0904 17:31:10.293902 2525 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:10.404255 kubelet[2525]: I0904 17:31:10.404219 2525 scope.go:117] "RemoveContainer" containerID="96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb" Sep 4 17:31:10.407149 containerd[1443]: time="2024-09-04T17:31:10.406257385Z" level=info msg="RemoveContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\"" Sep 4 17:31:10.410323 systemd[1]: Removed slice kubepods-burstable-pod3fb8c2b9_ddc1_47e4_b6b5_8a6c2ab3de7b.slice - libcontainer container kubepods-burstable-pod3fb8c2b9_ddc1_47e4_b6b5_8a6c2ab3de7b.slice. Sep 4 17:31:10.410421 systemd[1]: kubepods-burstable-pod3fb8c2b9_ddc1_47e4_b6b5_8a6c2ab3de7b.slice: Consumed 6.874s CPU time. Sep 4 17:31:10.412022 systemd[1]: Removed slice kubepods-besteffort-pod058eeb97_8442_450e_868e_6d751e021b15.slice - libcontainer container kubepods-besteffort-pod058eeb97_8442_450e_868e_6d751e021b15.slice. 
Sep 4 17:31:10.413457 kubelet[2525]: I0904 17:31:10.413106 2525 scope.go:117] "RemoveContainer" containerID="fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778" Sep 4 17:31:10.413549 containerd[1443]: time="2024-09-04T17:31:10.412898979Z" level=info msg="RemoveContainer for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" returns successfully" Sep 4 17:31:10.414312 containerd[1443]: time="2024-09-04T17:31:10.414283866Z" level=info msg="RemoveContainer for \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\"" Sep 4 17:31:10.418033 containerd[1443]: time="2024-09-04T17:31:10.417968125Z" level=info msg="RemoveContainer for \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\" returns successfully" Sep 4 17:31:10.418192 kubelet[2525]: I0904 17:31:10.418170 2525 scope.go:117] "RemoveContainer" containerID="67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd" Sep 4 17:31:10.420773 containerd[1443]: time="2024-09-04T17:31:10.420478418Z" level=info msg="RemoveContainer for \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\"" Sep 4 17:31:10.423175 containerd[1443]: time="2024-09-04T17:31:10.423143312Z" level=info msg="RemoveContainer for \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\" returns successfully" Sep 4 17:31:10.423460 kubelet[2525]: I0904 17:31:10.423437 2525 scope.go:117] "RemoveContainer" containerID="b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414" Sep 4 17:31:10.424401 containerd[1443]: time="2024-09-04T17:31:10.424374518Z" level=info msg="RemoveContainer for \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\"" Sep 4 17:31:10.427292 containerd[1443]: time="2024-09-04T17:31:10.427250933Z" level=info msg="RemoveContainer for \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\" returns successfully" Sep 4 17:31:10.427432 kubelet[2525]: I0904 17:31:10.427412 2525 scope.go:117] "RemoveContainer" containerID="5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d" Sep 4 17:31:10.428878 containerd[1443]: time="2024-09-04T17:31:10.428382979Z" level=info msg="RemoveContainer for \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\"" Sep 4 17:31:10.430956 containerd[1443]: time="2024-09-04T17:31:10.430926392Z" level=info msg="RemoveContainer for \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\" returns successfully" Sep 4 17:31:10.431206 kubelet[2525]: I0904 17:31:10.431173 2525 scope.go:117] "RemoveContainer" containerID="96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb" Sep 4 17:31:10.436220 containerd[1443]: time="2024-09-04T17:31:10.431412274Z" level=error msg="ContainerStatus for \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\": not found" Sep 4 17:31:10.438314 kubelet[2525]: E0904 17:31:10.438259 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\": not found" containerID="96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb" Sep 4 17:31:10.438421 kubelet[2525]: I0904 17:31:10.438318 2525 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb"} err="failed to get container status \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"96dd56fc5f450f340188166a30f41bef7d01b877c0ac365b9c650617bbe6f9bb\": not found" Sep 4 17:31:10.438421 kubelet[2525]: I0904 17:31:10.438414 2525 scope.go:117] "RemoveContainer" containerID="fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778" Sep 4 17:31:10.438654 containerd[1443]: time="2024-09-04T17:31:10.438621151Z" level=error msg="ContainerStatus for \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\": not found" Sep 4 17:31:10.438775 kubelet[2525]: E0904 17:31:10.438754 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\": not found" containerID="fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778" Sep 4 17:31:10.438807 kubelet[2525]: I0904 17:31:10.438783 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778"} err="failed to get container status \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa382e58ef83fd63fc641673f83cefd12d983dfbef89ee5be110546067df6778\": not found" Sep 4 17:31:10.438807 kubelet[2525]: I0904 17:31:10.438801 2525 scope.go:117] "RemoveContainer" containerID="67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd" Sep 4 17:31:10.439004 containerd[1443]: time="2024-09-04T17:31:10.438959313Z" level=error msg="ContainerStatus for \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\": not found" Sep 4 17:31:10.439097 kubelet[2525]: E0904 17:31:10.439078 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\": not found" containerID="67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd" Sep 4 17:31:10.439141 kubelet[2525]: I0904 17:31:10.439104 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd"} err="failed to get container status \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"67d15bf894e31768c74e8487f95e91bcf87a5b84a1428d627a2ba7239598e7bd\": not found" Sep 4 17:31:10.439167 kubelet[2525]: I0904 17:31:10.439142 2525 scope.go:117] "RemoveContainer" containerID="b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414" Sep 4 17:31:10.439371 containerd[1443]: time="2024-09-04T17:31:10.439311955Z" level=error msg="ContainerStatus for \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\": not found" Sep 4 17:31:10.439473 kubelet[2525]: E0904 17:31:10.439453 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\": not found" containerID="b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414" Sep 4 17:31:10.439516 kubelet[2525]: I0904 17:31:10.439480 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414"} err="failed to get container status \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\": rpc error: code = NotFound desc = an error occurred when try to find container \"b979999c4bd42a6def8d206bcdec544619c1cf613586a1e24190d80bccefc414\": not found" Sep 4 17:31:10.439516 kubelet[2525]: I0904 17:31:10.439504 2525 scope.go:117] "RemoveContainer" containerID="5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d" Sep 4 17:31:10.439682 containerd[1443]: time="2024-09-04T17:31:10.439636637Z" level=error msg="ContainerStatus for \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\": not found" Sep 4 17:31:10.439761 kubelet[2525]: E0904 17:31:10.439743 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\": not found" containerID="5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d" Sep 4 17:31:10.439795 kubelet[2525]: I0904 17:31:10.439764 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d"} err="failed to get container status \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b886ae9d3679c20126ce671bf1795bd7046eaf4009871ba9fe8ddf1e65d9d\": not found" Sep 4 17:31:10.439795 kubelet[2525]: I0904 17:31:10.439780 2525 scope.go:117] "RemoveContainer" containerID="5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9" Sep 4 17:31:10.440818 containerd[1443]: time="2024-09-04T17:31:10.440790762Z" level=info msg="RemoveContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\"" Sep 4 17:31:10.442851 containerd[1443]: time="2024-09-04T17:31:10.442824413Z" level=info msg="RemoveContainer for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" returns successfully" Sep 4 17:31:10.442971 kubelet[2525]: I0904 17:31:10.442955 2525 scope.go:117] "RemoveContainer" containerID="5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9" Sep 4 17:31:10.443151 containerd[1443]: time="2024-09-04T17:31:10.443118614Z" level=error msg="ContainerStatus for \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\": not 
found" Sep 4 17:31:10.443251 kubelet[2525]: E0904 17:31:10.443233 2525 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\": not found" containerID="5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9" Sep 4 17:31:10.443350 kubelet[2525]: I0904 17:31:10.443276 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9"} err="failed to get container status \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c631b760bbb6aebe9bd46426d7bd97a09d7b7d75e6fbe6464d5268c51130ae9\": not found" Sep 4 17:31:10.909151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d064f9e03e081580cdf18014068a75f975631776679a6b6b4a3919cb74ea69a3-rootfs.mount: Deactivated successfully. Sep 4 17:31:10.909244 systemd[1]: var-lib-kubelet-pods-058eeb97\x2d8442\x2d450e\x2d868e\x2d6d751e021b15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmfvjf.mount: Deactivated successfully. Sep 4 17:31:10.909314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e17b179d4efab63f5738901c3f9992958d49ae0e598fb40bb72aeea074a5fbc-rootfs.mount: Deactivated successfully. Sep 4 17:31:10.909389 systemd[1]: var-lib-kubelet-pods-3fb8c2b9\x2dddc1\x2d47e4\x2db6b5\x2d8a6c2ab3de7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxbmq.mount: Deactivated successfully. Sep 4 17:31:10.909440 systemd[1]: var-lib-kubelet-pods-3fb8c2b9\x2dddc1\x2d47e4\x2db6b5\x2d8a6c2ab3de7b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:31:10.909491 systemd[1]: var-lib-kubelet-pods-3fb8c2b9\x2dddc1\x2d47e4\x2db6b5\x2d8a6c2ab3de7b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:31:11.207039 kubelet[2525]: I0904 17:31:11.206940 2525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058eeb97-8442-450e-868e-6d751e021b15" path="/var/lib/kubelet/pods/058eeb97-8442-450e-868e-6d751e021b15/volumes" Sep 4 17:31:11.207394 kubelet[2525]: I0904 17:31:11.207365 2525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" path="/var/lib/kubelet/pods/3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b/volumes" Sep 4 17:31:11.261802 kubelet[2525]: E0904 17:31:11.261739 2525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:31:11.850488 sshd[4165]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:11.860980 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:56066.service: Deactivated successfully. Sep 4 17:31:11.862840 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:31:11.865105 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:31:11.873953 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:56072.service - OpenSSH per-connection server daemon (10.0.0.1:56072). Sep 4 17:31:11.875701 systemd-logind[1420]: Removed session 23. 
Sep 4 17:31:11.906835 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 56072 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:11.907221 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:11.911788 systemd-logind[1420]: New session 24 of user core. Sep 4 17:31:11.917508 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:31:12.577828 sshd[4329]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:12.582367 kubelet[2525]: I0904 17:31:12.581573 2525 topology_manager.go:215] "Topology Admit Handler" podUID="a6d8abf5-0af7-430e-bd84-7f2a3807d40d" podNamespace="kube-system" podName="cilium-57jrz" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581758 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="mount-cgroup" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581773 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="apply-sysctl-overwrites" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581779 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="058eeb97-8442-450e-868e-6d751e021b15" containerName="cilium-operator" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581785 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="mount-bpf-fs" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581790 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="clean-cilium-state" Sep 4 17:31:12.582367 kubelet[2525]: E0904 17:31:12.581796 2525 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="cilium-agent" Sep 4 17:31:12.582367 kubelet[2525]: I0904 17:31:12.581817 2525 memory_manager.go:354] "RemoveStaleState removing state" podUID="058eeb97-8442-450e-868e-6d751e021b15" containerName="cilium-operator" Sep 4 17:31:12.582367 kubelet[2525]: I0904 17:31:12.581824 2525 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb8c2b9-ddc1-47e4-b6b5-8a6c2ab3de7b" containerName="cilium-agent" Sep 4 17:31:12.588959 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:56072.service: Deactivated successfully. Sep 4 17:31:12.590633 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:31:12.594248 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:31:12.601756 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:57116.service - OpenSSH per-connection server daemon (10.0.0.1:57116). 
Sep 4 17:31:12.607477 kubelet[2525]: I0904 17:31:12.607447 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-cilium-config-path\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607480 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-host-proc-sys-kernel\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607504 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-cilium-ipsec-secrets\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607521 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-hostproc\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607542 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-cilium-cgroup\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607559 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-lib-modules\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607585 kubelet[2525]: I0904 17:31:12.607575 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-clustermesh-secrets\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607592 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-host-proc-sys-net\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607607 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-bpf-maps\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607622 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-hubble-tls\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607637 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-cilium-run\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607651 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-cni-path\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607773 kubelet[2525]: I0904 17:31:12.607668 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-etc-cni-netd\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607901 kubelet[2525]: I0904 17:31:12.607691 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-xtables-lock\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.607901 kubelet[2525]: I0904 17:31:12.607707 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgk7\" (UniqueName: \"kubernetes.io/projected/a6d8abf5-0af7-430e-bd84-7f2a3807d40d-kube-api-access-5vgk7\") pod \"cilium-57jrz\" (UID: \"a6d8abf5-0af7-430e-bd84-7f2a3807d40d\") " pod="kube-system/cilium-57jrz" Sep 4 17:31:12.608674 systemd-logind[1420]: Removed session 24. Sep 4 17:31:12.614226 systemd[1]: Created slice kubepods-burstable-poda6d8abf5_0af7_430e_bd84_7f2a3807d40d.slice - libcontainer container kubepods-burstable-poda6d8abf5_0af7_430e_bd84_7f2a3807d40d.slice. Sep 4 17:31:12.643779 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 57116 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:12.644498 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:12.653575 systemd-logind[1420]: New session 25 of user core. Sep 4 17:31:12.661549 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:31:12.723261 sshd[4343]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:12.737005 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:57116.service: Deactivated successfully. Sep 4 17:31:12.738652 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:31:12.740014 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:31:12.750639 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:57130.service - OpenSSH per-connection server daemon (10.0.0.1:57130). Sep 4 17:31:12.752413 systemd-logind[1420]: Removed session 25. 
Sep 4 17:31:12.780837 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 57130 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:31:12.781548 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:12.785377 systemd-logind[1420]: New session 26 of user core. Sep 4 17:31:12.791492 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:31:12.925402 kubelet[2525]: E0904 17:31:12.925250 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:12.926744 containerd[1443]: time="2024-09-04T17:31:12.926638341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57jrz,Uid:a6d8abf5-0af7-430e-bd84-7f2a3807d40d,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:12.954055 containerd[1443]: time="2024-09-04T17:31:12.953912713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:12.954055 containerd[1443]: time="2024-09-04T17:31:12.953977394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:12.954531 containerd[1443]: time="2024-09-04T17:31:12.954480756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:12.954613 containerd[1443]: time="2024-09-04T17:31:12.954516996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:12.972572 systemd[1]: Started cri-containerd-94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2.scope - libcontainer container 94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2. Sep 4 17:31:12.999068 containerd[1443]: time="2024-09-04T17:31:12.999024213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57jrz,Uid:a6d8abf5-0af7-430e-bd84-7f2a3807d40d,Namespace:kube-system,Attempt:0,} returns sandbox id \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\"" Sep 4 17:31:12.999694 kubelet[2525]: E0904 17:31:12.999668 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:13.001553 containerd[1443]: time="2024-09-04T17:31:13.001519785Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:31:13.014927 containerd[1443]: time="2024-09-04T17:31:13.014801648Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5\"" Sep 4 17:31:13.015320 containerd[1443]: time="2024-09-04T17:31:13.015265490Z" level=info msg="StartContainer for \"7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5\"" Sep 4 17:31:13.039562 systemd[1]: Started cri-containerd-7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5.scope - libcontainer container 7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5. 
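The entries above show the CRI call sequence for the new cilium-57jrz pod: RunPodSandbox returns a sandbox ID, then CreateContainer and StartContainer run the first init container (mount-cgroup). A hedged sketch of that sequence over the CRI API follows; the socket path and image reference are placeholders, the metadata values are taken from this log, and real requests carry far more configuration (mounts, security context, resources) than shown here.

```go
// Hedged sketch of RunPodSandbox -> CreateContainer -> StartContainer,
// mirroring the sequence logged for cilium-57jrz. Not the kubelet's code.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-57jrz",
			Uid:       "a6d8abf5-0af7-430e-bd84-7f2a3807d40d",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:<tag>"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}
```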
Sep 4 17:31:13.062323 containerd[1443]: time="2024-09-04T17:31:13.062260593Z" level=info msg="StartContainer for \"7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5\" returns successfully" Sep 4 17:31:13.073671 systemd[1]: cri-containerd-7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5.scope: Deactivated successfully. Sep 4 17:31:13.101633 containerd[1443]: time="2024-09-04T17:31:13.101466259Z" level=info msg="shim disconnected" id=7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5 namespace=k8s.io Sep 4 17:31:13.101633 containerd[1443]: time="2024-09-04T17:31:13.101524499Z" level=warning msg="cleaning up after shim disconnected" id=7d508e33622f70fea8f2f154d1b0259b09b9d050e8079414c83d63f079717eb5 namespace=k8s.io Sep 4 17:31:13.101633 containerd[1443]: time="2024-09-04T17:31:13.101533099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:13.206058 kubelet[2525]: E0904 17:31:13.205892 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:13.263602 kubelet[2525]: I0904 17:31:13.262599 2525 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:31:13Z","lastTransitionTime":"2024-09-04T17:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:31:13.416313 kubelet[2525]: E0904 17:31:13.416251 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:13.420033 containerd[1443]: time="2024-09-04T17:31:13.419996450Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:31:13.429502 containerd[1443]: time="2024-09-04T17:31:13.429382454Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074\"" Sep 4 17:31:13.430034 containerd[1443]: time="2024-09-04T17:31:13.430006297Z" level=info msg="StartContainer for \"53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074\"" Sep 4 17:31:13.457497 systemd[1]: Started cri-containerd-53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074.scope - libcontainer container 53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074. Sep 4 17:31:13.481906 containerd[1443]: time="2024-09-04T17:31:13.481781863Z" level=info msg="StartContainer for \"53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074\" returns successfully" Sep 4 17:31:13.487784 systemd[1]: cri-containerd-53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074.scope: Deactivated successfully. 
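The setters.go entry above marks the node NotReady because the CNI plugin is not yet initialized; the condition clears later once cilium-agent starts and the lxc_health link comes up. A hedged client-go sketch for reading that Ready condition is shown below; the kubeconfig path is an assumption, while the node name "localhost" is the one used throughout this log.

```go
// Hedged sketch: read the node Ready condition that setters.go updated above.
// Kubeconfig path is illustrative; node name comes from the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```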
Sep 4 17:31:13.506745 containerd[1443]: time="2024-09-04T17:31:13.506676821Z" level=info msg="shim disconnected" id=53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074 namespace=k8s.io Sep 4 17:31:13.506745 containerd[1443]: time="2024-09-04T17:31:13.506740861Z" level=warning msg="cleaning up after shim disconnected" id=53fbc123c88035e467636417c829ff7702a4daa04d8b1f9a45813840e4ff9074 namespace=k8s.io Sep 4 17:31:13.506745 containerd[1443]: time="2024-09-04T17:31:13.506750581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:14.205984 kubelet[2525]: E0904 17:31:14.205946 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:14.421072 kubelet[2525]: E0904 17:31:14.420954 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:14.422772 containerd[1443]: time="2024-09-04T17:31:14.422728754Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:31:14.442574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762236510.mount: Deactivated successfully. Sep 4 17:31:14.443813 containerd[1443]: time="2024-09-04T17:31:14.443678091Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1\"" Sep 4 17:31:14.445571 containerd[1443]: time="2024-09-04T17:31:14.444186694Z" level=info msg="StartContainer for \"f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1\"" Sep 4 17:31:14.472531 systemd[1]: Started cri-containerd-f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1.scope - libcontainer container f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1. Sep 4 17:31:14.503296 containerd[1443]: time="2024-09-04T17:31:14.500746675Z" level=info msg="StartContainer for \"f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1\" returns successfully" Sep 4 17:31:14.501524 systemd[1]: cri-containerd-f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1.scope: Deactivated successfully. Sep 4 17:31:14.538399 containerd[1443]: time="2024-09-04T17:31:14.538326849Z" level=info msg="shim disconnected" id=f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1 namespace=k8s.io Sep 4 17:31:14.538399 containerd[1443]: time="2024-09-04T17:31:14.538391249Z" level=warning msg="cleaning up after shim disconnected" id=f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1 namespace=k8s.io Sep 4 17:31:14.538399 containerd[1443]: time="2024-09-04T17:31:14.538400049Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:14.712174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f894a1f7028cbccc1a634e34c1fbef5c770180f2d812932469109f8e9650ebc1-rootfs.mount: Deactivated successfully. 
Sep 4 17:31:15.424712 kubelet[2525]: E0904 17:31:15.424418 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:15.426551 containerd[1443]: time="2024-09-04T17:31:15.426513383Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:31:15.446612 containerd[1443]: time="2024-09-04T17:31:15.446539314Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c\"" Sep 4 17:31:15.447117 containerd[1443]: time="2024-09-04T17:31:15.447071156Z" level=info msg="StartContainer for \"de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c\"" Sep 4 17:31:15.480853 systemd[1]: Started cri-containerd-de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c.scope - libcontainer container de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c. Sep 4 17:31:15.503462 systemd[1]: cri-containerd-de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c.scope: Deactivated successfully. Sep 4 17:31:15.507935 containerd[1443]: time="2024-09-04T17:31:15.507887990Z" level=info msg="StartContainer for \"de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c\" returns successfully" Sep 4 17:31:15.530048 containerd[1443]: time="2024-09-04T17:31:15.529974169Z" level=info msg="shim disconnected" id=de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c namespace=k8s.io Sep 4 17:31:15.530048 containerd[1443]: time="2024-09-04T17:31:15.530042770Z" level=warning msg="cleaning up after shim disconnected" id=de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c namespace=k8s.io Sep 4 17:31:15.530048 containerd[1443]: time="2024-09-04T17:31:15.530051450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:15.712289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de3a251a9bf95422e59a5a08a4eccc54eadc067d7059fa8182b4add64ecc1d0c-rootfs.mount: Deactivated successfully. 
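The recurring kubelet dns.go "Nameserver limits exceeded" errors in this section mean the node's /etc/resolv.conf lists more nameservers than the three the libc resolver supports; the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. The short Go sketch below mirrors that truncation rule; the parsing is an illustration, not the kubelet's implementation.

```go
// Minimal sketch: keep at most three nameservers from resolv.conf, mirroring
// the limit behind the "Nameserver limits exceeded" messages in this log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers: %v\n", servers)
	}
}
```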
Sep 4 17:31:16.263358 kubelet[2525]: E0904 17:31:16.263287 2525 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:31:16.428781 kubelet[2525]: E0904 17:31:16.428723 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:16.434683 containerd[1443]: time="2024-09-04T17:31:16.433557030Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:31:16.446929 containerd[1443]: time="2024-09-04T17:31:16.446855888Z" level=info msg="CreateContainer within sandbox \"94e6b9f72e543e63f9da746580b56acd4e8cbe9f843df5e81e16272996eb31a2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa\"" Sep 4 17:31:16.448948 containerd[1443]: time="2024-09-04T17:31:16.447480531Z" level=info msg="StartContainer for \"2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa\"" Sep 4 17:31:16.475532 systemd[1]: Started cri-containerd-2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa.scope - libcontainer container 2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa. Sep 4 17:31:16.500026 containerd[1443]: time="2024-09-04T17:31:16.499917401Z" level=info msg="StartContainer for \"2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa\" returns successfully" Sep 4 17:31:16.778361 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 17:31:17.433816 kubelet[2525]: E0904 17:31:17.433756 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:17.452897 kubelet[2525]: I0904 17:31:17.452834 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57jrz" podStartSLOduration=5.452803735 podStartE2EDuration="5.452803735s" podCreationTimestamp="2024-09-04 17:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:17.452010852 +0000 UTC m=+86.344425396" watchObservedRunningTime="2024-09-04 17:31:17.452803735 +0000 UTC m=+86.345218279" Sep 4 17:31:18.926677 kubelet[2525]: E0904 17:31:18.926632 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:19.742760 systemd-networkd[1375]: lxc_health: Link UP Sep 4 17:31:19.755859 systemd-networkd[1375]: lxc_health: Gained carrier Sep 4 17:31:20.927395 kubelet[2525]: E0904 17:31:20.927329 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:21.434498 systemd-networkd[1375]: lxc_health: Gained IPv6LL Sep 4 17:31:21.442517 kubelet[2525]: E0904 17:31:21.441612 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:22.443358 kubelet[2525]: E0904 17:31:22.443134 2525 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:25.548551 systemd[1]: run-containerd-runc-k8s.io-2df6910f7006d8b3a7b4cb539073efddfdf36e6dd6c06f99626821faf41136aa-runc.FOQgiQ.mount: Deactivated successfully. Sep 4 17:31:25.616787 sshd[4355]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:25.619353 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:57130.service: Deactivated successfully. Sep 4 17:31:25.621161 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:31:25.622524 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:31:25.625430 systemd-logind[1420]: Removed session 26.