Sep 4 17:43:00.900165 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:43:00.900187 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024 Sep 4 17:43:00.900197 kernel: KASLR enabled Sep 4 17:43:00.900204 kernel: efi: EFI v2.7 by EDK II Sep 4 17:43:00.900210 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:43:00.900216 kernel: random: crng init done Sep 4 17:43:00.900223 kernel: ACPI: Early table checksum verification disabled Sep 4 17:43:00.900229 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:43:00.900236 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:43:00.900243 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900250 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900257 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900263 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900269 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900277 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900285 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900292 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900299 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:43:00.900306 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:43:00.900312 kernel: NUMA: Failed to initialise from firmware Sep 4 17:43:00.900319 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:43:00.900326 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 4 17:43:00.900333 kernel: Zone ranges: Sep 4 17:43:00.900339 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:43:00.900346 kernel: DMA32 empty Sep 4 17:43:00.900354 kernel: Normal empty Sep 4 17:43:00.900360 kernel: Movable zone start for each node Sep 4 17:43:00.900367 kernel: Early memory node ranges Sep 4 17:43:00.900374 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:43:00.900381 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:43:00.900387 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:43:00.900394 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:43:00.900463 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:43:00.900471 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:43:00.900477 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:43:00.900484 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:43:00.900491 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:43:00.900501 kernel: psci: probing for conduit method from ACPI. Sep 4 17:43:00.900508 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:43:00.900514 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:43:00.900524 kernel: psci: Trusted OS migration not required Sep 4 17:43:00.900531 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:43:00.900539 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:43:00.900547 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:43:00.900554 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:43:00.900562 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:43:00.900569 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:43:00.900576 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:43:00.900583 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:43:00.900590 kernel: CPU features: detected: Spectre-v4 Sep 4 17:43:00.900605 kernel: CPU features: detected: Spectre-BHB Sep 4 17:43:00.900613 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:43:00.900620 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:43:00.900629 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:43:00.900636 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:43:00.900643 kernel: alternatives: applying boot alternatives Sep 4 17:43:00.900652 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:43:00.900659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:43:00.900666 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:43:00.900674 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:43:00.900681 kernel: Fallback order for Node 0: 0 Sep 4 17:43:00.900688 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:43:00.900695 kernel: Policy zone: DMA Sep 4 17:43:00.900702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:43:00.900711 kernel: software IO TLB: area num 4. Sep 4 17:43:00.900718 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:43:00.900726 kernel: Memory: 2386592K/2572288K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 185696K reserved, 0K cma-reserved) Sep 4 17:43:00.900733 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:43:00.900740 kernel: trace event string verifier disabled Sep 4 17:43:00.900747 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:43:00.900755 kernel: rcu: RCU event tracing is enabled. Sep 4 17:43:00.900762 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:43:00.900770 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:43:00.900777 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:43:00.900784 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:43:00.900791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:43:00.900800 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:43:00.900807 kernel: GICv3: 256 SPIs implemented Sep 4 17:43:00.900814 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:43:00.900821 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:43:00.900828 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:43:00.900835 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:43:00.900843 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:43:00.900850 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:43:00.900857 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:43:00.900865 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:43:00.900872 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:43:00.900880 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:43:00.900887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:43:00.900895 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:43:00.900902 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:43:00.900909 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:43:00.900917 kernel: arm-pv: using stolen time PV Sep 4 17:43:00.900924 kernel: Console: colour dummy device 80x25 Sep 4 17:43:00.900932 kernel: ACPI: Core revision 20230628 Sep 4 17:43:00.900939 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 17:43:00.900947 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:43:00.900955 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:43:00.900963 kernel: landlock: Up and running. Sep 4 17:43:00.900970 kernel: SELinux: Initializing. Sep 4 17:43:00.900978 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:43:00.900985 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:43:00.900992 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:43:00.901000 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:43:00.901008 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:43:00.901015 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:43:00.901024 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:43:00.901031 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:43:00.901039 kernel: Remapping and enabling EFI services. Sep 4 17:43:00.901046 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:43:00.901053 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:43:00.901061 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:43:00.901069 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:43:00.901076 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:43:00.901083 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:43:00.901091 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:43:00.901099 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:43:00.901107 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:43:00.901119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:43:00.901128 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:43:00.901136 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:43:00.901144 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:43:00.901152 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:43:00.901160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:43:00.901167 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:43:00.901177 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:43:00.901184 kernel: SMP: Total of 4 processors activated. Sep 4 17:43:00.901192 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:43:00.901200 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:43:00.901208 kernel: CPU features: detected: Common not Private translations Sep 4 17:43:00.901216 kernel: CPU features: detected: CRC32 instructions Sep 4 17:43:00.901224 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:43:00.901232 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:43:00.901241 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:43:00.901249 kernel: CPU features: detected: Privileged Access Never Sep 4 17:43:00.901256 kernel: CPU features: detected: RAS Extension Support Sep 4 17:43:00.901264 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:43:00.901272 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:43:00.901280 kernel: alternatives: applying system-wide alternatives Sep 4 17:43:00.901288 kernel: devtmpfs: initialized Sep 4 17:43:00.901296 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:43:00.901304 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:43:00.901313 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:43:00.901321 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:43:00.901329 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:43:00.901337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:43:00.901345 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:43:00.901353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:43:00.901361 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:43:00.901369 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:43:00.901376 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 4 17:43:00.901386 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:43:00.901393 kernel: cpuidle: using governor menu Sep 4 17:43:00.901408 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:43:00.901417 kernel: ASID allocator initialised with 32768 entries Sep 4 17:43:00.901424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:43:00.901432 kernel: Serial: AMBA PL011 UART driver Sep 4 17:43:00.901440 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:43:00.901448 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:43:00.901456 kernel: Modules: 509056 pages in range for PLT usage Sep 4 17:43:00.901466 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:43:00.901474 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:43:00.901482 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:43:00.901490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:43:00.901498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:43:00.901506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:43:00.901514 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:43:00.901522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:43:00.901530 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:43:00.901540 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:43:00.901548 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:43:00.901556 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:43:00.901564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:43:00.901572 kernel: ACPI: Interpreter enabled Sep 4 17:43:00.901579 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:43:00.901587 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:43:00.901599 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:43:00.901607 kernel: printk: console [ttyAMA0] enabled Sep 4 17:43:00.901616 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:43:00.901752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:43:00.901830 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:43:00.901899 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:43:00.901965 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:43:00.902031 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:43:00.902041 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:43:00.902051 kernel: PCI host bridge to bus 
0000:00 Sep 4 17:43:00.902123 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:43:00.902185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:43:00.902247 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:43:00.902323 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:43:00.902427 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 4 17:43:00.902510 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:43:00.902584 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:43:00.902663 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:43:00.902733 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:43:00.902802 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:43:00.902871 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:43:00.902939 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:43:00.903001 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:43:00.903063 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:43:00.903128 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:43:00.903138 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:43:00.903146 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:43:00.903155 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:43:00.903162 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:43:00.903171 kernel: iommu: Default domain type: Translated Sep 4 17:43:00.903178 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:43:00.903188 kernel: efivars: Registered efivars operations Sep 4 17:43:00.903196 kernel: vgaarb: loaded Sep 4 17:43:00.903203 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:43:00.903211 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:43:00.903219 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:43:00.903227 kernel: pnp: PnP ACPI init Sep 4 17:43:00.903306 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:43:00.903318 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:43:00.903328 kernel: NET: Registered PF_INET protocol family Sep 4 17:43:00.903336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:43:00.903344 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:43:00.903352 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:43:00.903360 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:43:00.903368 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:43:00.903375 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:43:00.903383 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:43:00.903391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:43:00.903411 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:43:00.903419 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:43:00.903427 kernel: kvm [1]: HYP mode not available Sep 4 17:43:00.903435 kernel: Initialise system trusted keyrings Sep 4 
17:43:00.903443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:43:00.903451 kernel: Key type asymmetric registered Sep 4 17:43:00.903458 kernel: Asymmetric key parser 'x509' registered Sep 4 17:43:00.903466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:43:00.903474 kernel: io scheduler mq-deadline registered Sep 4 17:43:00.903483 kernel: io scheduler kyber registered Sep 4 17:43:00.903491 kernel: io scheduler bfq registered Sep 4 17:43:00.903499 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:43:00.903507 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:43:00.903515 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:43:00.903589 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:43:00.903606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:43:00.903614 kernel: thunder_xcv, ver 1.0 Sep 4 17:43:00.903622 kernel: thunder_bgx, ver 1.0 Sep 4 17:43:00.903632 kernel: nicpf, ver 1.0 Sep 4 17:43:00.903640 kernel: nicvf, ver 1.0 Sep 4 17:43:00.903722 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:43:00.903790 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:43:00 UTC (1725471780) Sep 4 17:43:00.903800 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:43:00.903808 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:43:00.903816 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:43:00.903824 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:43:00.903834 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:43:00.903841 kernel: Segment Routing with IPv6 Sep 4 17:43:00.903849 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:43:00.903857 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:43:00.903865 kernel: Key type dns_resolver registered Sep 4 17:43:00.903872 kernel: registered taskstats version 1 Sep 4 17:43:00.903880 kernel: Loading compiled-in X.509 certificates Sep 4 17:43:00.903888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e' Sep 4 17:43:00.903896 kernel: Key type .fscrypt registered Sep 4 17:43:00.903904 kernel: Key type fscrypt-provisioning registered Sep 4 17:43:00.903912 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:43:00.903920 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:43:00.903928 kernel: ima: No architecture policies found Sep 4 17:43:00.903936 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:43:00.903944 kernel: clk: Disabling unused clocks Sep 4 17:43:00.903951 kernel: Freeing unused kernel memory: 39296K Sep 4 17:43:00.903959 kernel: Run /init as init process Sep 4 17:43:00.903967 kernel: with arguments: Sep 4 17:43:00.903975 kernel: /init Sep 4 17:43:00.903983 kernel: with environment: Sep 4 17:43:00.903990 kernel: HOME=/ Sep 4 17:43:00.903998 kernel: TERM=linux Sep 4 17:43:00.904006 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:43:00.904015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:43:00.904025 systemd[1]: Detected virtualization kvm. 
Sep 4 17:43:00.904034 systemd[1]: Detected architecture arm64. Sep 4 17:43:00.904043 systemd[1]: Running in initrd. Sep 4 17:43:00.904052 systemd[1]: No hostname configured, using default hostname. Sep 4 17:43:00.904060 systemd[1]: Hostname set to . Sep 4 17:43:00.904069 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:43:00.904077 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:43:00.904085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:00.904094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:00.904103 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:43:00.904112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:43:00.904121 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:43:00.904130 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:43:00.904140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:43:00.904149 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:43:00.904157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:00.904166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:00.904175 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:43:00.904184 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:43:00.904192 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:43:00.904200 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:43:00.904209 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:43:00.904218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:43:00.904226 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:43:00.904235 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:43:00.904245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:00.904254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:00.904262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:00.904271 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:43:00.904279 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:43:00.904288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:43:00.904297 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:43:00.904305 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:43:00.904313 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:43:00.904323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:43:00.904332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:00.904340 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:43:00.904348 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 4 17:43:00.904357 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:43:00.904380 systemd-journald[237]: Collecting audit messages is disabled. Sep 4 17:43:00.904412 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:43:00.904421 systemd-journald[237]: Journal started Sep 4 17:43:00.904442 systemd-journald[237]: Runtime Journal (/run/log/journal/3ffeed49dfb74ecb92fca1155c9839fe) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:43:00.904478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:00.895434 systemd-modules-load[239]: Inserted module 'overlay' Sep 4 17:43:00.908441 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:43:00.908466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:43:00.910985 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 4 17:43:00.911828 kernel: Bridge firewalling registered Sep 4 17:43:00.911416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:00.912718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:00.925593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:43:00.927231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:43:00.929640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:43:00.932299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:43:00.939712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:43:00.940787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:00.942667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:00.944206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:00.954522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:43:00.956444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:43:00.965223 dracut-cmdline[276]: dracut-dracut-053 Sep 4 17:43:00.967566 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:43:00.981849 systemd-resolved[280]: Positive Trust Anchors: Sep 4 17:43:00.981865 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:43:00.981896 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:43:00.986610 systemd-resolved[280]: Defaulting to hostname 'linux'. Sep 4 17:43:00.987500 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:43:00.989459 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:01.031423 kernel: SCSI subsystem initialized Sep 4 17:43:01.038420 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:43:01.045431 kernel: iscsi: registered transport (tcp) Sep 4 17:43:01.059416 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:43:01.059451 kernel: QLogic iSCSI HBA Driver Sep 4 17:43:01.100043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:43:01.113575 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:43:01.130854 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:43:01.130893 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:43:01.130917 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:43:01.177438 kernel: raid6: neonx8 gen() 15779 MB/s Sep 4 17:43:01.194432 kernel: raid6: neonx4 gen() 15643 MB/s Sep 4 17:43:01.211438 kernel: raid6: neonx2 gen() 13224 MB/s Sep 4 17:43:01.228435 kernel: raid6: neonx1 gen() 10458 MB/s Sep 4 17:43:01.245420 kernel: raid6: int64x8 gen() 6963 MB/s Sep 4 17:43:01.262430 kernel: raid6: int64x4 gen() 7333 MB/s Sep 4 17:43:01.279430 kernel: raid6: int64x2 gen() 6130 MB/s Sep 4 17:43:01.296425 kernel: raid6: int64x1 gen() 5059 MB/s Sep 4 17:43:01.296443 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Sep 4 17:43:01.313436 kernel: raid6: .... xor() 11913 MB/s, rmw enabled Sep 4 17:43:01.313469 kernel: raid6: using neon recovery algorithm Sep 4 17:43:01.318508 kernel: xor: measuring software checksum speed Sep 4 17:43:01.318542 kernel: 8regs : 19864 MB/sec Sep 4 17:43:01.319416 kernel: 32regs : 19725 MB/sec Sep 4 17:43:01.320553 kernel: arm64_neon : 27206 MB/sec Sep 4 17:43:01.320581 kernel: xor: using function: arm64_neon (27206 MB/sec) Sep 4 17:43:01.369421 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:43:01.380014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:43:01.391558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:01.402754 systemd-udevd[464]: Using default interface naming scheme 'v255'. Sep 4 17:43:01.405962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:01.410545 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:43:01.423804 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Sep 4 17:43:01.448323 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:43:01.459589 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:43:01.496356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:01.503008 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:43:01.513715 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:43:01.514889 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:43:01.516467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:01.518388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:43:01.527512 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:43:01.535226 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:43:01.538544 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 4 17:43:01.546094 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:43:01.547390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:43:01.546208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:01.549807 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:43:01.553867 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:43:01.553886 kernel: GPT:9289727 != 19775487 Sep 4 17:43:01.553896 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:43:01.553905 kernel: GPT:9289727 != 19775487 Sep 4 17:43:01.553914 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:43:01.553924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:43:01.554772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:43:01.554943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:01.560232 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:01.568642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:01.576429 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (520) Sep 4 17:43:01.578223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 17:43:01.580388 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509) Sep 4 17:43:01.582780 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:43:01.584130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:01.597441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:43:01.598320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:43:01.603281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:43:01.615534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:43:01.616945 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:43:01.620784 disk-uuid[554]: Primary Header is updated. 
Sep 4 17:43:01.620784 disk-uuid[554]: Secondary Entries is updated. Sep 4 17:43:01.620784 disk-uuid[554]: Secondary Header is updated. Sep 4 17:43:01.623413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:43:01.634302 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:02.637274 disk-uuid[555]: The operation has completed successfully. Sep 4 17:43:02.638677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:43:02.660043 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:43:02.660140 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:43:02.681604 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:43:02.685534 sh[578]: Success Sep 4 17:43:02.697899 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:43:02.738763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:43:02.740436 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:43:02.741374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:43:02.751161 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87 Sep 4 17:43:02.751193 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:43:02.751204 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:43:02.751215 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:43:02.751726 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:43:02.755271 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:43:02.756537 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:43:02.765569 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:43:02.767013 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:43:02.774712 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:43:02.774755 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:43:02.774766 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:43:02.777433 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:43:02.783795 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:43:02.786417 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:43:02.790937 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:43:02.799553 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:43:02.860831 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:43:02.870551 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:43:02.894041 systemd-networkd[764]: lo: Link UP Sep 4 17:43:02.894053 systemd-networkd[764]: lo: Gained carrier Sep 4 17:43:02.894754 systemd-networkd[764]: Enumeration completed Sep 4 17:43:02.895041 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 4 17:43:02.895730 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:02.895733 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:43:02.896688 systemd[1]: Reached target network.target - Network. Sep 4 17:43:02.896753 systemd-networkd[764]: eth0: Link UP Sep 4 17:43:02.896756 systemd-networkd[764]: eth0: Gained carrier Sep 4 17:43:02.896763 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:02.930706 ignition[674]: Ignition 2.19.0 Sep 4 17:43:02.930715 ignition[674]: Stage: fetch-offline Sep 4 17:43:02.931452 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:43:02.930751 ignition[674]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:02.930764 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:02.930950 ignition[674]: parsed url from cmdline: "" Sep 4 17:43:02.930954 ignition[674]: no config URL provided Sep 4 17:43:02.930958 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:43:02.930965 ignition[674]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:43:02.930988 ignition[674]: op(1): [started] loading QEMU firmware config module Sep 4 17:43:02.930992 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:43:02.938109 ignition[674]: op(1): [finished] loading QEMU firmware config module Sep 4 17:43:02.974695 ignition[674]: parsing config with SHA512: f2069b89f3c612a3c32f00a67ffe533dfccf8ff56214c2c97216b0c07228a5ec15923dc01556f9bb87e4ad0709ed740aa439a1c5b95e4d77bb838150a7dc4ea4 Sep 4 17:43:02.980930 unknown[674]: fetched base config from "system" Sep 4 17:43:02.980941 unknown[674]: fetched user config from "qemu" Sep 4 17:43:02.981370 ignition[674]: fetch-offline: fetch-offline passed Sep 4 17:43:02.981451 ignition[674]: Ignition finished successfully Sep 4 17:43:02.983837 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:43:02.985158 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:43:02.990563 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:43:03.000221 ignition[775]: Ignition 2.19.0 Sep 4 17:43:03.000231 ignition[775]: Stage: kargs Sep 4 17:43:03.000384 ignition[775]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.000394 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:03.001349 ignition[775]: kargs: kargs passed Sep 4 17:43:03.001390 ignition[775]: Ignition finished successfully Sep 4 17:43:03.003516 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:43:03.012536 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:43:03.021331 ignition[783]: Ignition 2.19.0 Sep 4 17:43:03.021341 ignition[783]: Stage: disks Sep 4 17:43:03.021527 ignition[783]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.021536 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:03.022359 ignition[783]: disks: disks passed Sep 4 17:43:03.025238 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 4 17:43:03.022455 ignition[783]: Ignition finished successfully Sep 4 17:43:03.026540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:43:03.028234 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:43:03.029612 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:43:03.031037 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:43:03.032648 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:43:03.043513 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:43:03.053360 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:43:03.057421 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:43:03.069488 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:43:03.114260 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:43:03.115900 kernel: EXT4-fs (vda9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none. Sep 4 17:43:03.115608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:43:03.127478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:43:03.129166 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:43:03.130371 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:43:03.130421 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:43:03.135588 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Sep 4 17:43:03.130444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:43:03.136848 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:43:03.140734 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:43:03.140755 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:43:03.140766 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:43:03.141135 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:43:03.143991 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:43:03.144530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:43:03.185494 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:43:03.188438 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:43:03.192685 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:43:03.196371 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:43:03.261867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:03.269500 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:43:03.271718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:43:03.276408 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:43:03.290043 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:43:03.294113 ignition[917]: INFO : Ignition 2.19.0 Sep 4 17:43:03.294902 ignition[917]: INFO : Stage: mount Sep 4 17:43:03.296702 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.296702 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:03.296702 ignition[917]: INFO : mount: mount passed Sep 4 17:43:03.296702 ignition[917]: INFO : Ignition finished successfully Sep 4 17:43:03.297961 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:43:03.313496 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:43:03.750002 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:43:03.760651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:43:03.766640 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929) Sep 4 17:43:03.766673 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:43:03.766685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:43:03.767801 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:43:03.770431 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:43:03.770845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:43:03.795571 ignition[946]: INFO : Ignition 2.19.0 Sep 4 17:43:03.795571 ignition[946]: INFO : Stage: files Sep 4 17:43:03.796986 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:03.796986 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:03.796986 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:43:03.799737 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:43:03.799737 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:43:03.799737 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:43:03.799737 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:43:03.799737 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:43:03.799259 unknown[946]: wrote ssh authorized keys file for user: core Sep 4 17:43:03.805506 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:43:03.805506 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:43:03.849422 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:43:03.898796 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:43:03.900338 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:43:03.900338 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 17:43:04.192289 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:43:04.246793 ignition[946]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:43:04.246793 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:43:04.249652 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Sep 4 17:43:04.379691 systemd-networkd[764]: eth0: Gained IPv6LL Sep 4 17:43:04.424414 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:43:04.689054 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:43:04.689054 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(e): op(f): [started] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:43:04.692343 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:43:04.710526 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:43:04.714273 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:43:04.716535 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:43:04.716535 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:04.716535 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:43:04.716535 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:04.716535 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:43:04.716535 ignition[946]: INFO : files: files passed Sep 4 17:43:04.716535 ignition[946]: INFO : Ignition finished successfully Sep 4 17:43:04.717256 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:43:04.726557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:43:04.728770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:43:04.730002 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:43:04.731355 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:43:04.735655 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:43:04.738554 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:04.738554 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:04.740983 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:43:04.742990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:04.744392 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:43:04.753593 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:43:04.771326 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:43:04.771446 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:43:04.773356 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:43:04.774951 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:43:04.776229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:43:04.776972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Sep 4 17:43:04.791117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:04.793173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:43:04.803559 systemd[1]: Stopped target network.target - Network. Sep 4 17:43:04.805226 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:04.806151 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:04.807629 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:43:04.808894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:43:04.809006 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:43:04.810828 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:43:04.812723 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:43:04.814210 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:43:04.815412 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:43:04.816916 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:43:04.818294 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:43:04.819988 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:43:04.821348 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:43:04.823050 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:43:04.824276 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:43:04.825375 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:43:04.825497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:43:04.827515 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:04.829104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:04.830470 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:43:04.831482 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:04.832955 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:43:04.833071 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:43:04.835067 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:43:04.835186 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:43:04.836641 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:43:04.838007 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:43:04.838713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:04.839702 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:43:04.840827 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:43:04.842330 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:43:04.842435 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:43:04.844180 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:43:04.844269 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:43:04.845597 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 4 17:43:04.845713 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:43:04.847037 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:43:04.847135 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:43:04.866597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:43:04.867998 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:43:04.868894 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:43:04.870334 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:43:04.871907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:43:04.872045 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:04.873546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:43:04.873554 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 4 17:43:04.873649 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:43:04.877775 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:43:04.877868 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:43:04.880169 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:43:04.880719 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:43:04.880808 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:43:04.887262 ignition[1001]: INFO : Ignition 2.19.0 Sep 4 17:43:04.887262 ignition[1001]: INFO : Stage: umount Sep 4 17:43:04.887262 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:43:04.887262 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:43:04.887262 ignition[1001]: INFO : umount: umount passed Sep 4 17:43:04.887262 ignition[1001]: INFO : Ignition finished successfully Sep 4 17:43:04.884201 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:43:04.884330 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:04.891532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:43:04.892723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:43:04.892782 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:43:04.894148 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:43:04.894190 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:04.895876 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:43:04.895924 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:04.898339 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:43:04.898393 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:04.900145 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:43:04.900228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:43:04.901825 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:43:04.901927 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:43:04.903353 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:43:04.904433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
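The Ignition files stage recorded above (op(4) through op(13)) is driven entirely by the machine's provisioning config. Below is a minimal Python sketch of what the corresponding config could look like, following the general Ignition v3 layout of storage.files, storage.links and systemd.units; the spec version, file modes and contents sources are assumptions for illustration, while the paths, the kubernetes.raw link target and the enable/disable presets are taken from the log.

# Sketch of an Ignition v3-style config matching the file writes, link and unit
# presets logged by the files stage above. Spec version, modes and "source"
# values are placeholders/assumptions; paths and presets come from the log.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/bin/cilium.tar.gz", "mode": 420,
             "contents": {"source": "https://example.invalid/cilium.tar.gz"}},  # hypothetical URL
            {"path": "/home/core/install.sh", "mode": 493,
             "contents": {"source": "data:,%23!%2Fbin%2Fbash%0A"}},             # hypothetical inline data
            {"path": "/etc/flatcar/update.conf", "mode": 420,
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},                # hypothetical contents
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw", "mode": 420,
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # placeholder unit body
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}

print(json.dumps(config, indent=2))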
Sep 4 17:43:04.908880 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:43:04.908946 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:43:04.910305 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:43:04.910351 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:43:04.912025 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:43:04.912069 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:43:04.913649 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:43:04.913692 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:43:04.915281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:43:04.915324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:43:04.917471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:04.919491 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:43:04.919584 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:43:04.934050 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:43:04.934186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:04.936354 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:43:04.936393 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:04.937454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:43:04.937498 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:04.939515 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:43:04.939576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:43:04.941736 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:43:04.941780 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:43:04.943951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:43:04.943993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:43:04.953580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:43:04.954642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:43:04.954700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:43:04.956284 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:43:04.956323 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:04.958246 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:43:04.958297 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:43:04.960271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:43:04.960313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:04.962171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:43:04.962247 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:43:04.964356 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Sep 4 17:43:04.966431 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:43:04.975633 systemd[1]: Switching root. Sep 4 17:43:05.009423 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 4 17:43:05.009468 systemd-journald[237]: Journal stopped Sep 4 17:43:05.699124 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:43:05.699174 kernel: SELinux: policy capability open_perms=1 Sep 4 17:43:05.699188 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:43:05.699197 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:43:05.699217 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:43:05.699226 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:43:05.699236 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:43:05.699245 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:43:05.699254 kernel: audit: type=1403 audit(1725471785.167:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:43:05.699273 systemd[1]: Successfully loaded SELinux policy in 39.222ms. Sep 4 17:43:05.699294 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.827ms. Sep 4 17:43:05.699307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:43:05.699320 systemd[1]: Detected virtualization kvm. Sep 4 17:43:05.699330 systemd[1]: Detected architecture arm64. Sep 4 17:43:05.699341 systemd[1]: Detected first boot. Sep 4 17:43:05.699352 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:43:05.699362 zram_generator::config[1044]: No configuration found. Sep 4 17:43:05.699374 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:43:05.699384 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:43:05.699474 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:43:05.699490 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:43:05.699501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:43:05.699512 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:43:05.699523 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:43:05.699533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:43:05.699559 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:43:05.699583 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:43:05.699598 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:43:05.699609 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:43:05.699620 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:43:05.699631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:43:05.699642 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:43:05.699653 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
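The journal prefix used throughout this log ("Sep 4 HH:MM:SS.ffffff") makes it straightforward to measure phase durations, for example the roughly 0.7 s gap between the initrd journal stopping and the SELinux policy messages on the real root, alongside the 39.222 ms policy load systemd reports itself. A small stdlib-only sketch for computing such deltas; the year is not part of the prefix, so it is assumed here.

# Sketch: compute elapsed time between two journal entries by parsing the
# "Mon D HH:MM:SS.ffffff" prefix used in this log. The year is not part of
# the prefix, so 2024 is assumed.
from datetime import datetime

def parse_stamp(line: str, year: int = 2024) -> datetime:
    # e.g. "Sep 4 17:43:05.009468 systemd-journald[237]: Journal stopped"
    month, day, clock = line.split()[:3]
    return datetime.strptime(f"{year} {month} {day} {clock}", "%Y %b %d %H:%M:%S.%f")

a = parse_stamp("Sep 4 17:43:05.009468 systemd-journald[237]: Journal stopped")
b = parse_stamp("Sep 4 17:43:05.699124 kernel: SELinux: policy capability network_peer_controls=1")
print(f"gap across switch-root: {(b - a).total_seconds() * 1000:.1f} ms")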
Sep 4 17:43:05.699664 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:43:05.699674 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:43:05.699685 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:43:05.699698 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:43:05.699708 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:43:05.699718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:43:05.699732 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:43:05.699743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:43:05.699754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:43:05.699765 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:43:05.699775 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:43:05.699789 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:43:05.699799 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:43:05.699810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:43:05.699820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:43:05.699831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:43:05.699842 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:43:05.699853 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:43:05.699863 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:43:05.699874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:43:05.699886 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:43:05.699896 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:43:05.699907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:43:05.699918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:43:05.699929 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:43:05.699940 systemd[1]: Reached target machines.target - Containers. Sep 4 17:43:05.699951 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:43:05.699962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:05.699973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:43:05.699986 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:43:05.699997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:43:05.700007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:43:05.700018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:05.700028 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 4 17:43:05.700038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:05.700049 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:43:05.700060 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:43:05.700072 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:43:05.700083 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:43:05.700093 kernel: fuse: init (API version 7.39) Sep 4 17:43:05.700103 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:43:05.700113 kernel: loop: module loaded Sep 4 17:43:05.700123 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:43:05.700134 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:43:05.700145 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:43:05.700155 kernel: ACPI: bus type drm_connector registered Sep 4 17:43:05.700166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:43:05.700177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:43:05.700189 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:43:05.700199 systemd[1]: Stopped verity-setup.service. Sep 4 17:43:05.700209 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:43:05.700220 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:43:05.700230 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:43:05.700240 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:43:05.700268 systemd-journald[1110]: Collecting audit messages is disabled. Sep 4 17:43:05.700296 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:43:05.700308 systemd-journald[1110]: Journal started Sep 4 17:43:05.700329 systemd-journald[1110]: Runtime Journal (/run/log/journal/3ffeed49dfb74ecb92fca1155c9839fe) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:43:05.700363 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:43:05.517120 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:43:05.535263 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:43:05.535639 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:43:05.703482 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:43:05.705457 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:43:05.706824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:43:05.708287 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:43:05.708475 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:43:05.709806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:05.709954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:05.711287 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:43:05.711522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:43:05.713871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 4 17:43:05.714004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:05.715105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:43:05.715243 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:43:05.716295 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:43:05.716448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:05.717474 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:43:05.718819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:43:05.720105 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:43:05.731430 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:43:05.740495 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:43:05.742572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:43:05.743722 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:43:05.743762 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:43:05.745340 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:43:05.747209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:43:05.749268 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:43:05.750136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:05.751104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:43:05.753583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:43:05.754763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:43:05.759548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:43:05.760783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:43:05.761004 systemd-journald[1110]: Time spent on flushing to /var/log/journal/3ffeed49dfb74ecb92fca1155c9839fe is 25.254ms for 858 entries. Sep 4 17:43:05.761004 systemd-journald[1110]: System Journal (/var/log/journal/3ffeed49dfb74ecb92fca1155c9839fe) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:43:05.799180 systemd-journald[1110]: Received client request to flush runtime journal. Sep 4 17:43:05.799234 kernel: loop0: detected capacity change from 0 to 194096 Sep 4 17:43:05.763644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:43:05.767484 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:43:05.775153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:43:05.777717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:43:05.781172 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
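The modprobe@*.service instances above are one-shot wrappers that load configfs, dm_mod, drm, efi_pstore, fuse and loop. One quick way to see which of them ended up as loadable modules is to scan /proc/modules, as in the sketch below; modules built into the kernel will not appear there, so absence alone does not mean a unit failed.

# Sketch: check which of the modules requested by the modprobe@ units above
# are visible in /proc/modules. Built-in modules are not listed there, so a
# "not listed" result is not necessarily a failure.
from pathlib import Path

WANTED = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}
for mod in sorted(WANTED):
    print(f"{mod}: {'loaded' if mod in loaded else 'not listed (maybe built-in)'}")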
Sep 4 17:43:05.782557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:43:05.783830 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:43:05.785057 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:43:05.791741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:43:05.794179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:43:05.809420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:43:05.809636 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:43:05.812036 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:43:05.813504 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:43:05.822579 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Sep 4 17:43:05.822593 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Sep 4 17:43:05.829432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:43:05.831341 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:43:05.831999 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:43:05.833421 kernel: loop1: detected capacity change from 0 to 65520 Sep 4 17:43:05.834170 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:43:05.843604 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:43:05.859766 kernel: loop2: detected capacity change from 0 to 114288 Sep 4 17:43:05.866932 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:43:05.875557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:43:05.886935 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Sep 4 17:43:05.887890 kernel: loop3: detected capacity change from 0 to 194096 Sep 4 17:43:05.886953 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Sep 4 17:43:05.890747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:43:05.898416 kernel: loop4: detected capacity change from 0 to 65520 Sep 4 17:43:05.903420 kernel: loop5: detected capacity change from 0 to 114288 Sep 4 17:43:05.910881 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:43:05.911545 (sd-merge)[1181]: Merged extensions into '/usr'. Sep 4 17:43:05.915240 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:43:05.915254 systemd[1]: Reloading... Sep 4 17:43:05.966436 zram_generator::config[1207]: No configuration found. Sep 4 17:43:06.042053 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:43:06.067058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:06.103202 systemd[1]: Reloading finished in 187 ms. Sep 4 17:43:06.129817 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
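The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why systemd immediately reloads and re-runs ldconfig. The sketch below lists extension images the way they are laid out on this host (*.raw symlinks under /etc/extensions pointing into /opt/extensions, as written by Ignition earlier); it only mirrors that layout and skips the extension-release validation the real tool performs.

# Sketch: enumerate sysext images as they appear on this host -- *.raw symlinks
# under /etc/extensions resolving to images under /opt/extensions. Illustrative
# only; systemd-sysext's real search path and validation are more involved.
from pathlib import Path

def list_extensions(root: str = "/") -> None:
    ext_dir = Path(root, "etc/extensions")
    if not ext_dir.is_dir():
        print(f"{ext_dir}: no extensions directory")
        return
    for entry in sorted(ext_dir.glob("*.raw")):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry.name} -> {target}")

if __name__ == "__main__":
    list_extensions()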
Sep 4 17:43:06.131236 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:43:06.142651 systemd[1]: Starting ensure-sysext.service... Sep 4 17:43:06.144343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:43:06.155253 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:43:06.155281 systemd[1]: Reloading... Sep 4 17:43:06.163711 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:43:06.164258 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:43:06.165039 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:43:06.165370 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 4 17:43:06.165631 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 4 17:43:06.168035 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:43:06.168122 systemd-tmpfiles[1242]: Skipping /boot Sep 4 17:43:06.174827 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:43:06.174912 systemd-tmpfiles[1242]: Skipping /boot Sep 4 17:43:06.207448 zram_generator::config[1267]: No configuration found. Sep 4 17:43:06.289874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:06.325212 systemd[1]: Reloading finished in 169 ms. Sep 4 17:43:06.339174 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:43:06.346846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:43:06.356873 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:06.359256 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:43:06.361861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:43:06.365878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:43:06.370554 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:43:06.374167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:43:06.378394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:06.382733 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:43:06.389741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:06.393523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:06.394478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:06.395293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:06.395482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:06.399009 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 17:43:06.399143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:06.403952 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:43:06.407946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:43:06.408133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:06.409306 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Sep 4 17:43:06.412141 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:43:06.417577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:43:06.425771 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:43:06.429847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:43:06.433664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:43:06.442585 augenrules[1352]: No rules Sep 4 17:43:06.444665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:43:06.445743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:43:06.449023 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:43:06.451614 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:43:06.453388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:43:06.456433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:06.457835 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:43:06.459487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:43:06.459678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:43:06.460946 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:43:06.461065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:43:06.462616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:43:06.462756 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:43:06.464647 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:43:06.464764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:43:06.470198 systemd[1]: Finished ensure-sysext.service. Sep 4 17:43:06.471231 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:43:06.482475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1345) Sep 4 17:43:06.483858 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:43:06.492467 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1351) Sep 4 17:43:06.496826 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:43:06.497479 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1351) Sep 4 17:43:06.498650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 17:43:06.498734 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:43:06.509621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:43:06.510717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:43:06.521554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:43:06.524350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:43:06.535795 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:43:06.550760 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:43:06.596857 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:43:06.597956 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:43:06.605992 systemd-resolved[1308]: Positive Trust Anchors: Sep 4 17:43:06.606009 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:43:06.606041 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:43:06.610351 systemd-networkd[1371]: lo: Link UP Sep 4 17:43:06.610358 systemd-networkd[1371]: lo: Gained carrier Sep 4 17:43:06.611092 systemd-networkd[1371]: Enumeration completed Sep 4 17:43:06.613980 systemd-resolved[1308]: Defaulting to hostname 'linux'. Sep 4 17:43:06.620652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:43:06.621722 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:06.621733 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:43:06.621910 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:43:06.622354 systemd-networkd[1371]: eth0: Link UP Sep 4 17:43:06.622362 systemd-networkd[1371]: eth0: Gained carrier Sep 4 17:43:06.622374 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:43:06.623209 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:43:06.624719 systemd[1]: Reached target network.target - Network. Sep 4 17:43:06.625734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:43:06.628212 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:43:06.639864 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
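At this point systemd-networkd has enumerated the links and both lo and eth0 have gained carrier. The same link state can be read directly from sysfs, independent of networkd, as in this minimal sketch; operstate, carrier and address are standard attributes under /sys/class/net, and eth0 is simply the interface name seen in this log.

# Sketch: read link state for eth0 straight from sysfs, matching the
# "Link UP / Gained carrier" transitions reported by systemd-networkd above.
from pathlib import Path

def link_state(ifname: str = "eth0") -> dict:
    base = Path("/sys/class/net", ifname)
    def read(attr: str) -> str:
        try:
            return (base / attr).read_text().strip()
        except OSError:
            return "n/a"   # e.g. carrier is unreadable while the link is down
    return {
        "operstate": read("operstate"),  # e.g. "up"
        "carrier": read("carrier"),      # "1" once the link has carrier
        "address": read("address"),      # MAC address
    }

print(link_state())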
Sep 4 17:43:06.642481 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:43:06.642919 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:43:06.643248 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Sep 4 17:43:06.645973 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:43:06.646033 systemd-timesyncd[1374]: Initial clock synchronization to Wed 2024-09-04 17:43:06.843003 UTC. Sep 4 17:43:06.658551 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:43:06.669910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:43:06.691487 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:43:06.692623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:43:06.693722 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:43:06.694587 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:43:06.695793 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:43:06.697182 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:43:06.698386 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:43:06.699297 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:43:06.700234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:43:06.700265 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:43:06.700943 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:43:06.702420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:43:06.704694 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:43:06.713297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:43:06.715516 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:43:06.716987 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:43:06.718146 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:43:06.719104 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:43:06.719865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:43:06.719899 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:43:06.720818 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:43:06.722758 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:43:06.724226 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:43:06.725536 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:43:06.728656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:43:06.731718 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
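systemd-timesyncd contacted 10.0.0.1:123 and performed the initial clock step shown above. The sketch below performs a minimal SNTP query against that same server and prints its transmit time; it is a bare-bones NTPv3 client-mode exchange with no delay or offset calculation, an illustration rather than what timesyncd actually does internally.

# Sketch: minimal SNTP query against the server systemd-timesyncd used above
# (10.0.0.1:123 in this log). Sends one client-mode NTPv3 packet and prints
# the server's transmit time.
import socket, struct, time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return secs - NTP_EPOCH_DELTA

if __name__ == "__main__":
    remote = sntp_time()
    print("server time:", time.ctime(remote), "| local time:", time.ctime())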
Sep 4 17:43:06.734721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:43:06.738292 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:43:06.739359 jq[1407]: false Sep 4 17:43:06.743010 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:43:06.745126 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:43:06.748448 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:43:06.750119 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:43:06.750529 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:43:06.751158 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:43:06.752759 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:43:06.753442 extend-filesystems[1408]: Found loop3 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found loop4 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found loop5 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda1 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda2 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda3 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found usr Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda4 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda6 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda7 Sep 4 17:43:06.753442 extend-filesystems[1408]: Found vda9 Sep 4 17:43:06.775720 extend-filesystems[1408]: Checking size of /dev/vda9 Sep 4 17:43:06.769157 dbus-daemon[1406]: [system] SELinux support is enabled Sep 4 17:43:06.755033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:43:06.759907 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:43:06.760480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:43:06.781030 jq[1417]: true Sep 4 17:43:06.761714 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:43:06.762482 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:43:06.769951 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:43:06.779831 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:43:06.779857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:43:06.780972 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:43:06.780989 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:43:06.787456 jq[1431]: true Sep 4 17:43:06.788809 extend-filesystems[1408]: Resized partition /dev/vda9 Sep 4 17:43:06.792354 extend-filesystems[1440]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:43:06.791805 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 4 17:43:06.792001 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:43:06.799728 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:43:06.812498 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:43:06.816371 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:43:06.818699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1337) Sep 4 17:43:06.818348 systemd-logind[1415]: New seat seat0. Sep 4 17:43:06.819000 tar[1422]: linux-arm64/helm Sep 4 17:43:06.819174 update_engine[1416]: I0904 17:43:06.818718 1416 main.cc:92] Flatcar Update Engine starting Sep 4 17:43:06.819715 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:43:06.823628 update_engine[1416]: I0904 17:43:06.823230 1416 update_check_scheduler.cc:74] Next update check in 3m45s Sep 4 17:43:06.825984 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:43:06.839488 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:43:06.839797 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:43:06.851235 extend-filesystems[1440]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:43:06.851235 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:43:06.851235 extend-filesystems[1440]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:43:06.857715 extend-filesystems[1408]: Resized filesystem in /dev/vda9 Sep 4 17:43:06.851693 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:43:06.853442 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:43:06.890915 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:43:06.891550 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:43:06.894100 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:43:06.904828 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:43:07.020459 containerd[1439]: time="2024-09-04T17:43:07.019675664Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:43:07.053428 containerd[1439]: time="2024-09-04T17:43:07.053340895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.054999 containerd[1439]: time="2024-09-04T17:43:07.054962944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:07.055161 containerd[1439]: time="2024-09-04T17:43:07.055142994Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:43:07.055225 containerd[1439]: time="2024-09-04T17:43:07.055212383Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:43:07.055514 containerd[1439]: time="2024-09-04T17:43:07.055490758Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 4 17:43:07.055643 containerd[1439]: time="2024-09-04T17:43:07.055626175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.055775 containerd[1439]: time="2024-09-04T17:43:07.055754583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:07.055939 containerd[1439]: time="2024-09-04T17:43:07.055844669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056173333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056196572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056222106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056232599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056317275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.056574 containerd[1439]: time="2024-09-04T17:43:07.056538926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:43:07.057002 containerd[1439]: time="2024-09-04T17:43:07.056975464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:43:07.057311 containerd[1439]: time="2024-09-04T17:43:07.057117766Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:43:07.057311 containerd[1439]: time="2024-09-04T17:43:07.057220313Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:43:07.057311 containerd[1439]: time="2024-09-04T17:43:07.057275766Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:43:07.061946 containerd[1439]: time="2024-09-04T17:43:07.061913747Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062118020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062156218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062173678Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062187818Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062316309Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062566075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062672023Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062689196Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062704853Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062719526Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062734567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062752601Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062776906Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063495 containerd[1439]: time="2024-09-04T17:43:07.062792357Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062804940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062817645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062828425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062850393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062864328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062876378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062888961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062901543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062914618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062932036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062947939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062961218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062976219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.063782 containerd[1439]: time="2024-09-04T17:43:07.062988105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063001958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063016344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063032410Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063052657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063078109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063098110Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063203854Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063219838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063230658Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063242052Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063251274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063263242Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063272873Z" level=info msg="NRI interface is disabled by configuration." 
Sep 4 17:43:07.064015 containerd[1439]: time="2024-09-04T17:43:07.063283284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:43:07.064824 containerd[1439]: time="2024-09-04T17:43:07.064707823Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:43:07.065070 containerd[1439]: time="2024-09-04T17:43:07.065043701Z" level=info msg="Connect containerd service" Sep 4 17:43:07.065228 containerd[1439]: time="2024-09-04T17:43:07.065209570Z" level=info msg="using legacy CRI server" Sep 4 17:43:07.065384 containerd[1439]: time="2024-09-04T17:43:07.065274532Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:43:07.065771 containerd[1439]: time="2024-09-04T17:43:07.065571064Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:43:07.066756 containerd[1439]: time="2024-09-04T17:43:07.066670914Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:43:07.067477 containerd[1439]: time="2024-09-04T17:43:07.067435830Z" level=info msg="Start subscribing containerd event" Sep 4 17:43:07.067588 containerd[1439]: time="2024-09-04T17:43:07.067575468Z" level=info msg="Start recovering state" Sep 4 17:43:07.068799 containerd[1439]: time="2024-09-04T17:43:07.068754995Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:43:07.069393 containerd[1439]: time="2024-09-04T17:43:07.068831188Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:43:07.069577 containerd[1439]: time="2024-09-04T17:43:07.069481795Z" level=info msg="Start event monitor" Sep 4 17:43:07.069577 containerd[1439]: time="2024-09-04T17:43:07.069511837Z" level=info msg="Start snapshots syncer" Sep 4 17:43:07.069577 containerd[1439]: time="2024-09-04T17:43:07.069523354Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:43:07.069577 containerd[1439]: time="2024-09-04T17:43:07.069533191Z" level=info msg="Start streaming server" Sep 4 17:43:07.069751 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:43:07.070871 containerd[1439]: time="2024-09-04T17:43:07.070842273Z" level=info msg="containerd successfully booted in 0.055364s" Sep 4 17:43:07.192897 tar[1422]: linux-arm64/LICENSE Sep 4 17:43:07.193083 tar[1422]: linux-arm64/README.md Sep 4 17:43:07.211023 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:43:07.488060 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:43:07.508828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:43:07.518730 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:43:07.524798 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:43:07.526482 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:43:07.529180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:43:07.544264 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:43:07.554808 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:43:07.557189 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:43:07.558560 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:43:08.285062 systemd-networkd[1371]: eth0: Gained IPv6LL Sep 4 17:43:08.287193 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:43:08.290581 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:43:08.303672 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:43:08.306090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:08.308283 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:43:08.323086 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:43:08.323300 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:43:08.325571 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:43:08.330914 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:43:08.795881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:43:08.797496 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:43:08.798650 systemd[1]: Startup finished in 543ms (kernel) + 4.460s (initrd) + 3.669s (userspace) = 8.674s. Sep 4 17:43:08.799387 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:09.289008 kubelet[1521]: E0904 17:43:09.288848 1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:09.291488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:09.291635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:13.676056 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:43:13.677187 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:47824.service - OpenSSH per-connection server daemon (10.0.0.1:47824). Sep 4 17:43:13.729366 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 47824 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:13.730944 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:13.743460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:43:13.757612 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:43:13.759390 systemd-logind[1415]: New session 1 of user core. Sep 4 17:43:13.767450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:43:13.769428 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:43:13.775522 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:13.851064 systemd[1540]: Queued start job for default target default.target. Sep 4 17:43:13.859388 systemd[1540]: Created slice app.slice - User Application Slice. Sep 4 17:43:13.859453 systemd[1540]: Reached target paths.target - Paths. Sep 4 17:43:13.859466 systemd[1540]: Reached target timers.target - Timers. Sep 4 17:43:13.860592 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:43:13.869101 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:43:13.869170 systemd[1540]: Reached target sockets.target - Sockets. Sep 4 17:43:13.869182 systemd[1540]: Reached target basic.target - Basic System. Sep 4 17:43:13.869218 systemd[1540]: Reached target default.target - Main User Target. Sep 4 17:43:13.869246 systemd[1540]: Startup finished in 86ms. Sep 4 17:43:13.869548 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:43:13.870717 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:43:13.934645 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:47832.service - OpenSSH per-connection server daemon (10.0.0.1:47832). Sep 4 17:43:13.978529 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 47832 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:13.979704 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:13.983467 systemd-logind[1415]: New session 2 of user core. 
Sep 4 17:43:13.999630 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:43:14.050856 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:14.059540 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:47832.service: Deactivated successfully. Sep 4 17:43:14.060708 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:43:14.061878 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:43:14.071636 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:47846.service - OpenSSH per-connection server daemon (10.0.0.1:47846). Sep 4 17:43:14.072443 systemd-logind[1415]: Removed session 2. Sep 4 17:43:14.103993 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 47846 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:14.105144 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:14.108840 systemd-logind[1415]: New session 3 of user core. Sep 4 17:43:14.119591 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:43:14.167661 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:14.180639 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:47846.service: Deactivated successfully. Sep 4 17:43:14.182691 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:43:14.183842 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:43:14.184911 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:47860.service - OpenSSH per-connection server daemon (10.0.0.1:47860). Sep 4 17:43:14.185651 systemd-logind[1415]: Removed session 3. Sep 4 17:43:14.220100 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 47860 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:14.221177 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:14.224455 systemd-logind[1415]: New session 4 of user core. Sep 4 17:43:14.232545 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:43:14.282649 sshd[1566]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:14.294743 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:47860.service: Deactivated successfully. Sep 4 17:43:14.296132 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:43:14.298482 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:43:14.299579 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:47870.service - OpenSSH per-connection server daemon (10.0.0.1:47870). Sep 4 17:43:14.300314 systemd-logind[1415]: Removed session 4. Sep 4 17:43:14.334588 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 47870 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:14.335662 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:14.339421 systemd-logind[1415]: New session 5 of user core. Sep 4 17:43:14.349547 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:43:14.410657 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:43:14.410924 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:14.425178 sudo[1576]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:14.428568 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:14.435680 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:47870.service: Deactivated successfully. 
Sep 4 17:43:14.436982 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:43:14.438214 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:43:14.439343 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:47872.service - OpenSSH per-connection server daemon (10.0.0.1:47872). Sep 4 17:43:14.440060 systemd-logind[1415]: Removed session 5. Sep 4 17:43:14.475243 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 47872 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:14.476370 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:14.479531 systemd-logind[1415]: New session 6 of user core. Sep 4 17:43:14.492531 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:43:14.542702 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:43:14.542974 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:14.545820 sudo[1585]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:14.550037 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:43:14.550296 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:14.563626 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:14.564681 auditctl[1588]: No rules Sep 4 17:43:14.565482 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:43:14.567450 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:14.568890 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:43:14.590959 augenrules[1606]: No rules Sep 4 17:43:14.593463 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:43:14.594651 sudo[1584]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:14.595899 sshd[1581]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:14.609600 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:47872.service: Deactivated successfully. Sep 4 17:43:14.610899 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:43:14.612093 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:43:14.613122 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:47880.service - OpenSSH per-connection server daemon (10.0.0.1:47880). Sep 4 17:43:14.613839 systemd-logind[1415]: Removed session 6. Sep 4 17:43:14.648755 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 47880 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:43:14.649913 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:43:14.653473 systemd-logind[1415]: New session 7 of user core. Sep 4 17:43:14.665540 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:43:14.715057 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:43:14.715621 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:43:14.833655 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 4 17:43:14.833807 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:43:15.085322 dockerd[1628]: time="2024-09-04T17:43:15.085196755Z" level=info msg="Starting up" Sep 4 17:43:15.231957 dockerd[1628]: time="2024-09-04T17:43:15.231905988Z" level=info msg="Loading containers: start." Sep 4 17:43:15.318429 kernel: Initializing XFRM netlink socket Sep 4 17:43:15.382958 systemd-networkd[1371]: docker0: Link UP Sep 4 17:43:15.400686 dockerd[1628]: time="2024-09-04T17:43:15.400635586Z" level=info msg="Loading containers: done." Sep 4 17:43:15.415862 dockerd[1628]: time="2024-09-04T17:43:15.415816487Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:43:15.415977 dockerd[1628]: time="2024-09-04T17:43:15.415912614Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:43:15.416038 dockerd[1628]: time="2024-09-04T17:43:15.416006724Z" level=info msg="Daemon has completed initialization" Sep 4 17:43:15.441045 dockerd[1628]: time="2024-09-04T17:43:15.440888919Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:43:15.441072 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:43:15.994340 containerd[1439]: time="2024-09-04T17:43:15.994291649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:43:16.658840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1360708209.mount: Deactivated successfully. Sep 4 17:43:18.839591 containerd[1439]: time="2024-09-04T17:43:18.839542425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:18.840610 containerd[1439]: time="2024-09-04T17:43:18.840322143Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943742" Sep 4 17:43:18.841268 containerd[1439]: time="2024-09-04T17:43:18.841222421Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:18.844336 containerd[1439]: time="2024-09-04T17:43:18.844304565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:18.845627 containerd[1439]: time="2024-09-04T17:43:18.845562943Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 2.851225213s" Sep 4 17:43:18.845627 containerd[1439]: time="2024-09-04T17:43:18.845598262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\"" Sep 4 17:43:18.863834 containerd[1439]: time="2024-09-04T17:43:18.863801536Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:43:19.542151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:43:19.555903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:19.654714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:19.657525 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:19.697653 kubelet[1851]: E0904 17:43:19.697490 1851 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:19.700038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:19.700166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:21.152421 containerd[1439]: time="2024-09-04T17:43:21.152317139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:21.152822 containerd[1439]: time="2024-09-04T17:43:21.152765436Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881134" Sep 4 17:43:21.153684 containerd[1439]: time="2024-09-04T17:43:21.153652233Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:21.156970 containerd[1439]: time="2024-09-04T17:43:21.156917755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:21.158101 containerd[1439]: time="2024-09-04T17:43:21.158063091Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 2.294224001s" Sep 4 17:43:21.158144 containerd[1439]: time="2024-09-04T17:43:21.158100392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\"" Sep 4 17:43:21.176438 containerd[1439]: time="2024-09-04T17:43:21.176365940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 17:43:22.410805 containerd[1439]: time="2024-09-04T17:43:22.410757951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:22.412184 containerd[1439]: time="2024-09-04T17:43:22.412147676Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154065" Sep 4 17:43:22.413461 containerd[1439]: time="2024-09-04T17:43:22.413384696Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:22.418065 containerd[1439]: time="2024-09-04T17:43:22.418018893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:22.419298 containerd[1439]: time="2024-09-04T17:43:22.419242669Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 1.242840678s" Sep 4 17:43:22.419298 containerd[1439]: time="2024-09-04T17:43:22.419276140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\"" Sep 4 17:43:22.438609 containerd[1439]: time="2024-09-04T17:43:22.438549974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:43:23.470381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374759065.mount: Deactivated successfully. Sep 4 17:43:23.695154 containerd[1439]: time="2024-09-04T17:43:23.695093590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:23.695878 containerd[1439]: time="2024-09-04T17:43:23.695839834Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646049" Sep 4 17:43:23.696635 containerd[1439]: time="2024-09-04T17:43:23.696443185Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:23.699982 containerd[1439]: time="2024-09-04T17:43:23.699918467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:23.700674 containerd[1439]: time="2024-09-04T17:43:23.700598800Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 1.262011707s" Sep 4 17:43:23.700674 containerd[1439]: time="2024-09-04T17:43:23.700630893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\"" Sep 4 17:43:23.718257 containerd[1439]: time="2024-09-04T17:43:23.718226618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:43:24.307390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272826611.mount: Deactivated successfully. 
Sep 4 17:43:26.160749 containerd[1439]: time="2024-09-04T17:43:26.160549682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.161643 containerd[1439]: time="2024-09-04T17:43:26.161393564Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Sep 4 17:43:26.162319 containerd[1439]: time="2024-09-04T17:43:26.162288345Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.165347 containerd[1439]: time="2024-09-04T17:43:26.165292308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.166561 containerd[1439]: time="2024-09-04T17:43:26.166509276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.44824536s" Sep 4 17:43:26.166561 containerd[1439]: time="2024-09-04T17:43:26.166542741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:43:26.185131 containerd[1439]: time="2024-09-04T17:43:26.185101242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:43:26.648174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451915405.mount: Deactivated successfully. 
Sep 4 17:43:26.653105 containerd[1439]: time="2024-09-04T17:43:26.652935997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.653691 containerd[1439]: time="2024-09-04T17:43:26.653487951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Sep 4 17:43:26.654422 containerd[1439]: time="2024-09-04T17:43:26.654342934Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.656514 containerd[1439]: time="2024-09-04T17:43:26.656461415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:26.657441 containerd[1439]: time="2024-09-04T17:43:26.657412666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 472.264692ms" Sep 4 17:43:26.657521 containerd[1439]: time="2024-09-04T17:43:26.657445850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:43:26.675553 containerd[1439]: time="2024-09-04T17:43:26.675378374Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:43:27.312946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390339671.mount: Deactivated successfully. Sep 4 17:43:29.440868 containerd[1439]: time="2024-09-04T17:43:29.440819188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:29.441787 containerd[1439]: time="2024-09-04T17:43:29.441518620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Sep 4 17:43:29.442437 containerd[1439]: time="2024-09-04T17:43:29.442391078Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:29.445644 containerd[1439]: time="2024-09-04T17:43:29.445603708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:43:29.447038 containerd[1439]: time="2024-09-04T17:43:29.446878370Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.771456957s" Sep 4 17:43:29.447038 containerd[1439]: time="2024-09-04T17:43:29.446915378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Sep 4 17:43:29.745712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 17:43:29.756627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:29.846575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:29.849794 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:43:29.887787 kubelet[2061]: E0904 17:43:29.887745 2061 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:43:29.889591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:43:29.889713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:43:34.648995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:34.659638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:34.674724 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)... Sep 4 17:43:34.674922 systemd[1]: Reloading... Sep 4 17:43:34.746463 zram_generator::config[2134]: No configuration found. Sep 4 17:43:34.874542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:34.926360 systemd[1]: Reloading finished in 251 ms. Sep 4 17:43:34.959342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:34.961834 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:34.963245 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:43:34.964480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:34.965882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:35.060618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:35.064208 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:43:35.101736 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:43:35.101736 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:43:35.101736 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:43:35.102638 kubelet[2180]: I0904 17:43:35.102590 2180 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:43:36.038449 kubelet[2180]: I0904 17:43:36.038412 2180 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:43:36.039440 kubelet[2180]: I0904 17:43:36.038622 2180 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:43:36.039440 kubelet[2180]: I0904 17:43:36.038826 2180 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:43:36.066337 kubelet[2180]: I0904 17:43:36.066296 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:43:36.066829 kubelet[2180]: E0904 17:43:36.066801 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.074450 kubelet[2180]: I0904 17:43:36.074426 2180 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:43:36.075462 kubelet[2180]: I0904 17:43:36.075424 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:43:36.075659 kubelet[2180]: I0904 17:43:36.075463 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:43:36.075742 kubelet[2180]: I0904 17:43:36.075731 2180 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:43:36.075742 kubelet[2180]: I0904 17:43:36.075741 2180 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:43:36.076003 kubelet[2180]: I0904 17:43:36.075988 2180 state_mem.go:36] "Initialized new in-memory state store" Sep 4 
17:43:36.077467 kubelet[2180]: I0904 17:43:36.077436 2180 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:43:36.077467 kubelet[2180]: I0904 17:43:36.077464 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:43:36.077698 kubelet[2180]: I0904 17:43:36.077685 2180 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:43:36.077738 kubelet[2180]: I0904 17:43:36.077704 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:43:36.078452 kubelet[2180]: W0904 17:43:36.078301 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.078452 kubelet[2180]: E0904 17:43:36.078364 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.078452 kubelet[2180]: W0904 17:43:36.078370 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.078452 kubelet[2180]: E0904 17:43:36.078442 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.079456 kubelet[2180]: I0904 17:43:36.078754 2180 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:43:36.079456 kubelet[2180]: I0904 17:43:36.079121 2180 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:43:36.079456 kubelet[2180]: W0904 17:43:36.079157 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 17:43:36.080253 kubelet[2180]: I0904 17:43:36.080232 2180 server.go:1264] "Started kubelet" Sep 4 17:43:36.082958 kubelet[2180]: I0904 17:43:36.082791 2180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:43:36.083737 kubelet[2180]: I0904 17:43:36.083699 2180 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:43:36.084110 kubelet[2180]: I0904 17:43:36.084045 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:43:36.084351 kubelet[2180]: I0904 17:43:36.084321 2180 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:43:36.087446 kubelet[2180]: I0904 17:43:36.085588 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:43:36.087446 kubelet[2180]: E0904 17:43:36.083609 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21b798b68ac21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:43:36.080206881 +0000 UTC m=+1.012868275,LastTimestamp:2024-09-04 17:43:36.080206881 +0000 UTC m=+1.012868275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:43:36.089358 kubelet[2180]: E0904 17:43:36.089164 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:43:36.089358 kubelet[2180]: I0904 17:43:36.089269 2180 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:43:36.089358 kubelet[2180]: I0904 17:43:36.089348 2180 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:43:36.090624 kubelet[2180]: W0904 17:43:36.090250 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.090624 kubelet[2180]: E0904 17:43:36.090305 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.090624 kubelet[2180]: E0904 17:43:36.090560 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms" Sep 4 17:43:36.093917 kubelet[2180]: I0904 17:43:36.093259 2180 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:43:36.095457 kubelet[2180]: I0904 17:43:36.094810 2180 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:43:36.095457 kubelet[2180]: I0904 17:43:36.094832 2180 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:43:36.095457 kubelet[2180]: I0904 17:43:36.094913 2180 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:43:36.096467 kubelet[2180]: E0904 17:43:36.096432 2180 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:43:36.105096 kubelet[2180]: I0904 17:43:36.104994 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:43:36.106217 kubelet[2180]: I0904 17:43:36.106194 2180 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:43:36.106217 kubelet[2180]: I0904 17:43:36.106209 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:43:36.106296 kubelet[2180]: I0904 17:43:36.106224 2180 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:43:36.106712 kubelet[2180]: I0904 17:43:36.106684 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:43:36.106889 kubelet[2180]: I0904 17:43:36.106832 2180 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:43:36.106889 kubelet[2180]: I0904 17:43:36.106870 2180 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:43:36.106935 kubelet[2180]: E0904 17:43:36.106908 2180 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:43:36.107412 kubelet[2180]: W0904 17:43:36.107359 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.107464 kubelet[2180]: E0904 17:43:36.107432 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.169681 kubelet[2180]: I0904 17:43:36.169637 2180 policy_none.go:49] "None policy: Start" Sep 4 17:43:36.170364 kubelet[2180]: I0904 17:43:36.170343 2180 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:43:36.170454 kubelet[2180]: I0904 17:43:36.170371 2180 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:43:36.175657 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:43:36.189073 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:43:36.190436 kubelet[2180]: I0904 17:43:36.190263 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:36.190753 kubelet[2180]: E0904 17:43:36.190723 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 4 17:43:36.192145 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:43:36.205423 kubelet[2180]: I0904 17:43:36.205193 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:43:36.205498 kubelet[2180]: I0904 17:43:36.205395 2180 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:43:36.205555 kubelet[2180]: I0904 17:43:36.205534 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:43:36.207083 kubelet[2180]: I0904 17:43:36.206994 2180 topology_manager.go:215] "Topology Admit Handler" podUID="fbd463756fc7a198a6b79d51b5f0ac0f" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:43:36.207231 kubelet[2180]: E0904 17:43:36.207111 2180 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:43:36.207896 kubelet[2180]: I0904 17:43:36.207827 2180 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:43:36.208533 kubelet[2180]: I0904 17:43:36.208488 2180 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:43:36.213842 systemd[1]: Created slice kubepods-burstable-podfbd463756fc7a198a6b79d51b5f0ac0f.slice - libcontainer container kubepods-burstable-podfbd463756fc7a198a6b79d51b5f0ac0f.slice. Sep 4 17:43:36.228874 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice. Sep 4 17:43:36.242233 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice. 
Sep 4 17:43:36.291430 kubelet[2180]: E0904 17:43:36.291292 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Sep 4 17:43:36.294632 kubelet[2180]: I0904 17:43:36.294588 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:36.294632 kubelet[2180]: I0904 17:43:36.294621 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:36.294719 kubelet[2180]: I0904 17:43:36.294641 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:36.294719 kubelet[2180]: I0904 17:43:36.294663 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:36.294719 kubelet[2180]: I0904 17:43:36.294703 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:36.294780 kubelet[2180]: I0904 17:43:36.294741 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:36.294780 kubelet[2180]: I0904 17:43:36.294764 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:36.294827 kubelet[2180]: I0904 17:43:36.294780 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:36.294827 
kubelet[2180]: I0904 17:43:36.294795 2180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:43:36.391950 kubelet[2180]: I0904 17:43:36.391901 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:36.392327 kubelet[2180]: E0904 17:43:36.392284 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 4 17:43:36.528169 kubelet[2180]: E0904 17:43:36.528118 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:36.528830 containerd[1439]: time="2024-09-04T17:43:36.528794019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbd463756fc7a198a6b79d51b5f0ac0f,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:36.540579 kubelet[2180]: E0904 17:43:36.540553 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:36.540992 containerd[1439]: time="2024-09-04T17:43:36.540953410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:36.544315 kubelet[2180]: E0904 17:43:36.544236 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:36.545014 containerd[1439]: time="2024-09-04T17:43:36.544985276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:36.691789 kubelet[2180]: E0904 17:43:36.691740 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Sep 4 17:43:36.794261 kubelet[2180]: I0904 17:43:36.794206 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:36.794792 kubelet[2180]: E0904 17:43:36.794672 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 4 17:43:36.961930 kubelet[2180]: W0904 17:43:36.961789 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:36.961930 kubelet[2180]: E0904 17:43:36.961865 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 
17:43:36.967060 kubelet[2180]: E0904 17:43:36.966958 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21b798b68ac21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:43:36.080206881 +0000 UTC m=+1.012868275,LastTimestamp:2024-09-04 17:43:36.080206881 +0000 UTC m=+1.012868275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:43:37.086955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703489726.mount: Deactivated successfully. Sep 4 17:43:37.096314 containerd[1439]: time="2024-09-04T17:43:37.096214328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:43:37.097768 containerd[1439]: time="2024-09-04T17:43:37.097722044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:43:37.098961 containerd[1439]: time="2024-09-04T17:43:37.098884525Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:43:37.100829 containerd[1439]: time="2024-09-04T17:43:37.100713145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:43:37.103302 containerd[1439]: time="2024-09-04T17:43:37.101778063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:43:37.103433 containerd[1439]: time="2024-09-04T17:43:37.102298336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:43:37.103433 containerd[1439]: time="2024-09-04T17:43:37.103169487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:43:37.106182 containerd[1439]: time="2024-09-04T17:43:37.106129974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:43:37.107364 containerd[1439]: time="2024-09-04T17:43:37.107314986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.27136ms" Sep 4 17:43:37.108383 containerd[1439]: time="2024-09-04T17:43:37.108236559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.363781ms" Sep 4 17:43:37.108991 containerd[1439]: time="2024-09-04T17:43:37.108855957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.83179ms" Sep 4 17:43:37.244188 containerd[1439]: time="2024-09-04T17:43:37.243972671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:37.244188 containerd[1439]: time="2024-09-04T17:43:37.244031738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:37.244188 containerd[1439]: time="2024-09-04T17:43:37.244047225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.244188 containerd[1439]: time="2024-09-04T17:43:37.244138226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.245375 kubelet[2180]: W0904 17:43:37.245310 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.245651 kubelet[2180]: E0904 17:43:37.245555 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.246566 containerd[1439]: time="2024-09-04T17:43:37.245955641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:37.246566 containerd[1439]: time="2024-09-04T17:43:37.246107069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246554109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246391676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246486119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246504287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246582442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.246663 containerd[1439]: time="2024-09-04T17:43:37.246643949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:37.264600 systemd[1]: Started cri-containerd-48e34bcc028fa6da96d495a2728a62b8908efe7ce8e2ad4fe1390f65816d0035.scope - libcontainer container 48e34bcc028fa6da96d495a2728a62b8908efe7ce8e2ad4fe1390f65816d0035. Sep 4 17:43:37.265808 systemd[1]: Started cri-containerd-a027248e56daf837d79f16d60880f8e06bdccee01afb48ba575880dfc7026fe9.scope - libcontainer container a027248e56daf837d79f16d60880f8e06bdccee01afb48ba575880dfc7026fe9. Sep 4 17:43:37.267835 kubelet[2180]: W0904 17:43:37.267760 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.267835 kubelet[2180]: E0904 17:43:37.267798 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.269333 systemd[1]: Started cri-containerd-c53802942fe596f61e081c4f3ec21756eacd985c642a2559847f5d902a2d7832.scope - libcontainer container c53802942fe596f61e081c4f3ec21756eacd985c642a2559847f5d902a2d7832. Sep 4 17:43:37.295031 containerd[1439]: time="2024-09-04T17:43:37.294981227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e34bcc028fa6da96d495a2728a62b8908efe7ce8e2ad4fe1390f65816d0035\"" Sep 4 17:43:37.296872 kubelet[2180]: E0904 17:43:37.296837 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:37.301649 containerd[1439]: time="2024-09-04T17:43:37.301611360Z" level=info msg="CreateContainer within sandbox \"48e34bcc028fa6da96d495a2728a62b8908efe7ce8e2ad4fe1390f65816d0035\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:43:37.302894 containerd[1439]: time="2024-09-04T17:43:37.302867924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a027248e56daf837d79f16d60880f8e06bdccee01afb48ba575880dfc7026fe9\"" Sep 4 17:43:37.303906 kubelet[2180]: E0904 17:43:37.303816 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:37.306201 containerd[1439]: time="2024-09-04T17:43:37.306153797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbd463756fc7a198a6b79d51b5f0ac0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c53802942fe596f61e081c4f3ec21756eacd985c642a2559847f5d902a2d7832\"" Sep 4 17:43:37.306783 containerd[1439]: time="2024-09-04T17:43:37.306676592Z" level=info msg="CreateContainer within sandbox \"a027248e56daf837d79f16d60880f8e06bdccee01afb48ba575880dfc7026fe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 
17:43:37.306905 kubelet[2180]: E0904 17:43:37.306738 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:37.308848 containerd[1439]: time="2024-09-04T17:43:37.308810709Z" level=info msg="CreateContainer within sandbox \"c53802942fe596f61e081c4f3ec21756eacd985c642a2559847f5d902a2d7832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:43:37.319905 containerd[1439]: time="2024-09-04T17:43:37.319841095Z" level=info msg="CreateContainer within sandbox \"48e34bcc028fa6da96d495a2728a62b8908efe7ce8e2ad4fe1390f65816d0035\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de1f66c4d142c2f683463f09c2b269c0fafd89c83e5e03e3511816e2702be67a\"" Sep 4 17:43:37.320498 containerd[1439]: time="2024-09-04T17:43:37.320466936Z" level=info msg="StartContainer for \"de1f66c4d142c2f683463f09c2b269c0fafd89c83e5e03e3511816e2702be67a\"" Sep 4 17:43:37.325722 containerd[1439]: time="2024-09-04T17:43:37.325645498Z" level=info msg="CreateContainer within sandbox \"c53802942fe596f61e081c4f3ec21756eacd985c642a2559847f5d902a2d7832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9d68732a0ae54d45df0f0ab50a72d47ca6e6df44779e5ebf2a454d75cb85d688\"" Sep 4 17:43:37.326300 containerd[1439]: time="2024-09-04T17:43:37.326266417Z" level=info msg="StartContainer for \"9d68732a0ae54d45df0f0ab50a72d47ca6e6df44779e5ebf2a454d75cb85d688\"" Sep 4 17:43:37.326571 containerd[1439]: time="2024-09-04T17:43:37.326281023Z" level=info msg="CreateContainer within sandbox \"a027248e56daf837d79f16d60880f8e06bdccee01afb48ba575880dfc7026fe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce6d550ea9cc4f5509418d5ba666ececf929678850dbd27357454ae6304e9da4\"" Sep 4 17:43:37.327040 containerd[1439]: time="2024-09-04T17:43:37.326978416Z" level=info msg="StartContainer for \"ce6d550ea9cc4f5509418d5ba666ececf929678850dbd27357454ae6304e9da4\"" Sep 4 17:43:37.344577 systemd[1]: Started cri-containerd-de1f66c4d142c2f683463f09c2b269c0fafd89c83e5e03e3511816e2702be67a.scope - libcontainer container de1f66c4d142c2f683463f09c2b269c0fafd89c83e5e03e3511816e2702be67a. Sep 4 17:43:37.356569 systemd[1]: Started cri-containerd-9d68732a0ae54d45df0f0ab50a72d47ca6e6df44779e5ebf2a454d75cb85d688.scope - libcontainer container 9d68732a0ae54d45df0f0ab50a72d47ca6e6df44779e5ebf2a454d75cb85d688. Sep 4 17:43:37.357839 systemd[1]: Started cri-containerd-ce6d550ea9cc4f5509418d5ba666ececf929678850dbd27357454ae6304e9da4.scope - libcontainer container ce6d550ea9cc4f5509418d5ba666ececf929678850dbd27357454ae6304e9da4. 
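The recurring dns.go:153 entries in this journal come from the kubelet trimming resolv.conf to three nameservers; the applied line it reports is 1.1.1.1 1.0.0.1 8.8.8.8. Below is a minimal Go sketch that mirrors that truncation against a local /etc/resolv.conf; the three-server cap is taken from the log output rather than from the kubelet's source, so treat it as an illustration only.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // The cap of three matches the applied nameserver line in the journal above;
    // hard-coding it here is an assumption made for illustration.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s): %v\n", len(servers)-maxNameservers, servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
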
Sep 4 17:43:37.383387 containerd[1439]: time="2024-09-04T17:43:37.383053724Z" level=info msg="StartContainer for \"de1f66c4d142c2f683463f09c2b269c0fafd89c83e5e03e3511816e2702be67a\" returns successfully" Sep 4 17:43:37.429900 containerd[1439]: time="2024-09-04T17:43:37.429851551Z" level=info msg="StartContainer for \"ce6d550ea9cc4f5509418d5ba666ececf929678850dbd27357454ae6304e9da4\" returns successfully" Sep 4 17:43:37.429900 containerd[1439]: time="2024-09-04T17:43:37.429851591Z" level=info msg="StartContainer for \"9d68732a0ae54d45df0f0ab50a72d47ca6e6df44779e5ebf2a454d75cb85d688\" returns successfully" Sep 4 17:43:37.492875 kubelet[2180]: E0904 17:43:37.492815 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Sep 4 17:43:37.518857 kubelet[2180]: W0904 17:43:37.518706 2180 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.518857 kubelet[2180]: E0904 17:43:37.518777 2180 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Sep 4 17:43:37.599578 kubelet[2180]: I0904 17:43:37.599141 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:37.600323 kubelet[2180]: E0904 17:43:37.600287 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 4 17:43:38.116016 kubelet[2180]: E0904 17:43:38.115877 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:38.118958 kubelet[2180]: E0904 17:43:38.118638 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:38.121220 kubelet[2180]: E0904 17:43:38.121142 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:39.126859 kubelet[2180]: E0904 17:43:39.126803 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:39.188017 kubelet[2180]: E0904 17:43:39.187982 2180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:43:39.202480 kubelet[2180]: I0904 17:43:39.202152 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:39.330139 kubelet[2180]: I0904 17:43:39.329884 2180 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:43:40.083464 kubelet[2180]: I0904 17:43:40.083425 2180 apiserver.go:52] "Watching apiserver" Sep 4 17:43:40.089999 kubelet[2180]: I0904 17:43:40.089944 2180 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:43:41.040107 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-7.scope)... Sep 4 17:43:41.040124 systemd[1]: Reloading... Sep 4 17:43:41.112435 zram_generator::config[2490]: No configuration found. Sep 4 17:43:41.265051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:43:41.328684 systemd[1]: Reloading finished in 287 ms. Sep 4 17:43:41.362285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:41.362558 kubelet[2180]: I0904 17:43:41.362536 2180 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:43:41.378520 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:43:41.379490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:41.379554 systemd[1]: kubelet.service: Consumed 1.365s CPU time, 114.2M memory peak, 0B memory swap peak. Sep 4 17:43:41.388744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:43:41.480070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:43:41.484303 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:43:41.526171 kubelet[2532]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:43:41.526171 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:43:41.526171 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:43:41.526537 kubelet[2532]: I0904 17:43:41.526211 2532 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:43:41.530782 kubelet[2532]: I0904 17:43:41.530751 2532 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:43:41.531509 kubelet[2532]: I0904 17:43:41.530874 2532 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:43:41.531509 kubelet[2532]: I0904 17:43:41.531049 2532 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:43:41.532335 kubelet[2532]: I0904 17:43:41.532318 2532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:43:41.533544 kubelet[2532]: I0904 17:43:41.533455 2532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:43:41.538098 kubelet[2532]: I0904 17:43:41.538078 2532 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:43:41.538273 kubelet[2532]: I0904 17:43:41.538251 2532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:43:41.538450 kubelet[2532]: I0904 17:43:41.538277 2532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:43:41.538563 kubelet[2532]: I0904 17:43:41.538456 2532 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:43:41.538563 kubelet[2532]: I0904 17:43:41.538465 2532 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:43:41.538563 kubelet[2532]: I0904 17:43:41.538505 2532 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:43:41.538645 kubelet[2532]: I0904 17:43:41.538601 2532 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:43:41.538645 kubelet[2532]: I0904 17:43:41.538612 2532 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:43:41.538645 kubelet[2532]: I0904 17:43:41.538639 2532 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:43:41.539002 kubelet[2532]: I0904 17:43:41.538651 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:43:41.541040 kubelet[2532]: I0904 17:43:41.539910 2532 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:43:41.541040 kubelet[2532]: I0904 17:43:41.540060 2532 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:43:41.541040 kubelet[2532]: I0904 17:43:41.540388 2532 server.go:1264] "Started kubelet" Sep 4 17:43:41.543898 kubelet[2532]: I0904 17:43:41.541828 2532 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:43:41.543898 kubelet[2532]: I0904 17:43:41.542781 2532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:43:41.543898 kubelet[2532]: I0904 17:43:41.542951 2532 server.go:455] "Adding debug handlers to kubelet server" 
Sep 4 17:43:41.545834 kubelet[2532]: I0904 17:43:41.545808 2532 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:43:41.547431 kubelet[2532]: I0904 17:43:41.546358 2532 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:43:41.547431 kubelet[2532]: I0904 17:43:41.546563 2532 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:43:41.557218 kubelet[2532]: I0904 17:43:41.557159 2532 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:43:41.557383 kubelet[2532]: I0904 17:43:41.557359 2532 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:43:41.557646 kubelet[2532]: I0904 17:43:41.557621 2532 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:43:41.557803 kubelet[2532]: I0904 17:43:41.557790 2532 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:43:41.558335 kubelet[2532]: I0904 17:43:41.558308 2532 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:43:41.566441 kubelet[2532]: E0904 17:43:41.566393 2532 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:43:41.567296 kubelet[2532]: I0904 17:43:41.567269 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:43:41.568572 kubelet[2532]: I0904 17:43:41.568540 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:43:41.568656 kubelet[2532]: I0904 17:43:41.568576 2532 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:43:41.568656 kubelet[2532]: I0904 17:43:41.568647 2532 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:43:41.568734 kubelet[2532]: E0904 17:43:41.568705 2532 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:43:41.591872 kubelet[2532]: I0904 17:43:41.591780 2532 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:43:41.592237 kubelet[2532]: I0904 17:43:41.591991 2532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:43:41.592237 kubelet[2532]: I0904 17:43:41.592015 2532 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:43:41.592237 kubelet[2532]: I0904 17:43:41.592165 2532 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:43:41.592237 kubelet[2532]: I0904 17:43:41.592178 2532 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:43:41.592237 kubelet[2532]: I0904 17:43:41.592195 2532 policy_none.go:49] "None policy: Start" Sep 4 17:43:41.593115 kubelet[2532]: I0904 17:43:41.593094 2532 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:43:41.593227 kubelet[2532]: I0904 17:43:41.593216 2532 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:43:41.593471 kubelet[2532]: I0904 17:43:41.593393 2532 state_mem.go:75] "Updated machine memory state" Sep 4 17:43:41.598468 kubelet[2532]: I0904 17:43:41.597224 2532 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:43:41.598468 kubelet[2532]: I0904 17:43:41.597390 2532 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:43:41.598468 kubelet[2532]: I0904 17:43:41.597523 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:43:41.650243 kubelet[2532]: I0904 17:43:41.650196 2532 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:43:41.656498 kubelet[2532]: I0904 17:43:41.656430 2532 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:43:41.656600 kubelet[2532]: I0904 17:43:41.656510 2532 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:43:41.669612 kubelet[2532]: I0904 17:43:41.669559 2532 topology_manager.go:215] "Topology Admit Handler" podUID="fbd463756fc7a198a6b79d51b5f0ac0f" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:43:41.669716 kubelet[2532]: I0904 17:43:41.669705 2532 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:43:41.669757 kubelet[2532]: I0904 17:43:41.669743 2532 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:43:41.848611 kubelet[2532]: I0904 17:43:41.847979 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:41.848611 kubelet[2532]: I0904 17:43:41.848023 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:41.848611 kubelet[2532]: I0904 17:43:41.848046 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:41.848611 kubelet[2532]: I0904 17:43:41.848063 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:41.848611 kubelet[2532]: I0904 17:43:41.848081 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:41.848811 kubelet[2532]: I0904 17:43:41.848095 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fbd463756fc7a198a6b79d51b5f0ac0f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbd463756fc7a198a6b79d51b5f0ac0f\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:41.848811 kubelet[2532]: I0904 17:43:41.848110 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:41.848811 kubelet[2532]: I0904 17:43:41.848126 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:43:41.848811 kubelet[2532]: I0904 17:43:41.848142 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:43:41.992531 kubelet[2532]: E0904 17:43:41.992491 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:41.994233 kubelet[2532]: E0904 17:43:41.994135 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:41.994233 kubelet[2532]: E0904 17:43:41.994171 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:42.035565 sudo[2568]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:43:42.036190 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 17:43:42.463249 sudo[2568]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:42.539338 kubelet[2532]: I0904 17:43:42.539295 2532 apiserver.go:52] "Watching apiserver" Sep 4 17:43:42.547392 kubelet[2532]: I0904 17:43:42.546487 2532 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:43:42.582595 kubelet[2532]: E0904 17:43:42.581165 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:42.582595 kubelet[2532]: E0904 17:43:42.581651 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:42.587817 kubelet[2532]: E0904 17:43:42.587771 2532 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:43:42.588195 kubelet[2532]: E0904 17:43:42.588169 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:42.604695 kubelet[2532]: I0904 17:43:42.604644 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.604604913 podStartE2EDuration="1.604604913s" podCreationTimestamp="2024-09-04 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:42.604477797 +0000 UTC m=+1.116311192" watchObservedRunningTime="2024-09-04 17:43:42.604604913 +0000 UTC m=+1.116438308" Sep 4 17:43:42.619208 kubelet[2532]: I0904 17:43:42.619020 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.61900435 podStartE2EDuration="1.61900435s" podCreationTimestamp="2024-09-04 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:42.61875788 +0000 UTC m=+1.130591235" watchObservedRunningTime="2024-09-04 17:43:42.61900435 +0000 UTC m=+1.130837745" Sep 4 17:43:42.619403 kubelet[2532]: I0904 17:43:42.619343 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.619335005 podStartE2EDuration="1.619335005s" podCreationTimestamp="2024-09-04 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:42.612239416 +0000 UTC m=+1.124072851" watchObservedRunningTime="2024-09-04 17:43:42.619335005 +0000 UTC m=+1.131168400" Sep 4 17:43:43.583235 kubelet[2532]: E0904 17:43:43.583183 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:43.922342 kubelet[2532]: E0904 17:43:43.922196 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:44.277079 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 4 17:43:44.278564 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:44.282949 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:47880.service: Deactivated successfully. Sep 4 17:43:44.285302 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:43:44.285699 systemd[1]: session-7.scope: Consumed 7.716s CPU time, 139.9M memory peak, 0B memory swap peak. Sep 4 17:43:44.286779 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:43:44.288340 systemd-logind[1415]: Removed session 7. Sep 4 17:43:44.584283 kubelet[2532]: E0904 17:43:44.584123 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:46.888090 kubelet[2532]: E0904 17:43:46.888017 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:52.132481 update_engine[1416]: I0904 17:43:52.132387 1416 update_attempter.cc:509] Updating boot flags... 
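Several entries above show kubelet 2180 retrying node registration against https://10.0.0.142:6443 with "connection refused" until the static kube-apiserver container is running, after which "Successfully registered node" appears (the restarted kubelet 2532 later finds the node already registered). A minimal client-go sketch for confirming that registration from the same host follows; the kubeconfig path and the panic-style error handling are assumptions for illustration, and only the node name "localhost" comes from the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // The kubeconfig path is assumed; the journal only shows kubeconfig
        // host-path volumes being attached to the static control-plane pods.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The node registers itself as "localhost" in the entries above.
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(node.Name, "created at", node.CreationTimestamp)
    }
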
Sep 4 17:43:52.167954 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2617) Sep 4 17:43:52.215419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2620) Sep 4 17:43:52.740894 kubelet[2532]: E0904 17:43:52.740821 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:53.597964 kubelet[2532]: E0904 17:43:53.597927 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:53.930396 kubelet[2532]: E0904 17:43:53.929903 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:56.807417 kubelet[2532]: I0904 17:43:56.807370 2532 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:43:56.807827 containerd[1439]: time="2024-09-04T17:43:56.807784248Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:43:56.808090 kubelet[2532]: I0904 17:43:56.807997 2532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:43:56.895427 kubelet[2532]: E0904 17:43:56.895355 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:57.700672 kubelet[2532]: I0904 17:43:57.700231 2532 topology_manager.go:215] "Topology Admit Handler" podUID="a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736" podNamespace="kube-system" podName="kube-proxy-2kn5x" Sep 4 17:43:57.712310 kubelet[2532]: I0904 17:43:57.712194 2532 topology_manager.go:215] "Topology Admit Handler" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" podNamespace="kube-system" podName="cilium-8964b" Sep 4 17:43:57.715899 systemd[1]: Created slice kubepods-besteffort-poda4eb0318_6b1a_4bbe_a2a2_1cbeb13b7736.slice - libcontainer container kubepods-besteffort-poda4eb0318_6b1a_4bbe_a2a2_1cbeb13b7736.slice. Sep 4 17:43:57.744854 systemd[1]: Created slice kubepods-burstable-pod8af1ef57_ea69_4dee_9393_adb6f0a9b7a0.slice - libcontainer container kubepods-burstable-pod8af1ef57_ea69_4dee_9393_adb6f0a9b7a0.slice. 
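The "Created slice" entries just above pair each admitted pod with a kubepods-<qosClass>-pod<uid>.slice cgroup in which the dashes of the pod UID are escaped to underscores. The following sketch reproduces that naming purely from the strings visible in this journal; it is not the kubelet's own implementation.

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameFor builds the systemd slice name seen in the journal:
    // "kubepods-<qosClass>-pod<uid>.slice" with '-' in the UID replaced by '_'.
    func sliceNameFor(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UIDs taken from the kube-proxy-2kn5x and cilium-8964b admission entries above.
        fmt.Println(sliceNameFor("besteffort", "a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736"))
        fmt.Println(sliceNameFor("burstable", "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"))
    }
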
Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.750868 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-xtables-lock\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.750915 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cni-path\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.750936 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-bpf-maps\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.750974 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-cgroup\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.751015 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-etc-cni-netd\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751527 kubelet[2532]: I0904 17:43:57.751033 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-lib-modules\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751823 kubelet[2532]: I0904 17:43:57.751051 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-run\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751823 kubelet[2532]: I0904 17:43:57.751068 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hostproc\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751823 kubelet[2532]: I0904 17:43:57.751113 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-clustermesh-secrets\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.751823 kubelet[2532]: I0904 17:43:57.751129 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-config-path\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.852336 kubelet[2532]: I0904 17:43:57.851854 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-net\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.852336 kubelet[2532]: I0904 17:43:57.851947 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hubble-tls\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.852336 kubelet[2532]: I0904 17:43:57.852090 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736-xtables-lock\") pod \"kube-proxy-2kn5x\" (UID: \"a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736\") " pod="kube-system/kube-proxy-2kn5x" Sep 4 17:43:57.852336 kubelet[2532]: I0904 17:43:57.852154 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-kernel\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.852336 kubelet[2532]: I0904 17:43:57.852171 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736-lib-modules\") pod \"kube-proxy-2kn5x\" (UID: \"a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736\") " pod="kube-system/kube-proxy-2kn5x" Sep 4 17:43:57.853842 kubelet[2532]: I0904 17:43:57.852189 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fjs\" (UniqueName: \"kubernetes.io/projected/a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736-kube-api-access-58fjs\") pod \"kube-proxy-2kn5x\" (UID: \"a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736\") " pod="kube-system/kube-proxy-2kn5x" Sep 4 17:43:57.853842 kubelet[2532]: I0904 17:43:57.852218 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736-kube-proxy\") pod \"kube-proxy-2kn5x\" (UID: \"a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736\") " pod="kube-system/kube-proxy-2kn5x" Sep 4 17:43:57.853842 kubelet[2532]: I0904 17:43:57.852249 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9z58\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-kube-api-access-d9z58\") pod \"cilium-8964b\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " pod="kube-system/cilium-8964b" Sep 4 17:43:57.867960 kubelet[2532]: I0904 17:43:57.865761 2532 topology_manager.go:215] "Topology Admit Handler" podUID="44048fb6-256a-41d3-8bee-d1640f488f64" podNamespace="kube-system" podName="cilium-operator-599987898-jkdh9" Sep 4 17:43:57.881996 systemd[1]: Created slice 
kubepods-besteffort-pod44048fb6_256a_41d3_8bee_d1640f488f64.slice - libcontainer container kubepods-besteffort-pod44048fb6_256a_41d3_8bee_d1640f488f64.slice. Sep 4 17:43:57.953342 kubelet[2532]: I0904 17:43:57.953216 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048fb6-256a-41d3-8bee-d1640f488f64-cilium-config-path\") pod \"cilium-operator-599987898-jkdh9\" (UID: \"44048fb6-256a-41d3-8bee-d1640f488f64\") " pod="kube-system/cilium-operator-599987898-jkdh9" Sep 4 17:43:57.953487 kubelet[2532]: I0904 17:43:57.953364 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lkkb\" (UniqueName: \"kubernetes.io/projected/44048fb6-256a-41d3-8bee-d1640f488f64-kube-api-access-8lkkb\") pod \"cilium-operator-599987898-jkdh9\" (UID: \"44048fb6-256a-41d3-8bee-d1640f488f64\") " pod="kube-system/cilium-operator-599987898-jkdh9" Sep 4 17:43:58.042627 kubelet[2532]: E0904 17:43:58.042528 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.043706 containerd[1439]: time="2024-09-04T17:43:58.043660711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kn5x,Uid:a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:58.051308 kubelet[2532]: E0904 17:43:58.051024 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.052716 containerd[1439]: time="2024-09-04T17:43:58.052647884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8964b,Uid:8af1ef57-ea69-4dee-9393-adb6f0a9b7a0,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:58.073117 containerd[1439]: time="2024-09-04T17:43:58.073022132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:58.073117 containerd[1439]: time="2024-09-04T17:43:58.073070458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:58.073117 containerd[1439]: time="2024-09-04T17:43:58.073086740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.073584 containerd[1439]: time="2024-09-04T17:43:58.073515914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.075468 containerd[1439]: time="2024-09-04T17:43:58.075380070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:58.075540 containerd[1439]: time="2024-09-04T17:43:58.075431676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:58.075707 containerd[1439]: time="2024-09-04T17:43:58.075523648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.076104 containerd[1439]: time="2024-09-04T17:43:58.076058595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.097546 systemd[1]: Started cri-containerd-45c55587a1d8bda4ee1d7a23e2a20ae142d4a452cab7a95e2553a2b9e29af54b.scope - libcontainer container 45c55587a1d8bda4ee1d7a23e2a20ae142d4a452cab7a95e2553a2b9e29af54b. Sep 4 17:43:58.098628 systemd[1]: Started cri-containerd-bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800.scope - libcontainer container bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800. Sep 4 17:43:58.121978 containerd[1439]: time="2024-09-04T17:43:58.121820404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kn5x,Uid:a4eb0318-6b1a-4bbe-a2a2-1cbeb13b7736,Namespace:kube-system,Attempt:0,} returns sandbox id \"45c55587a1d8bda4ee1d7a23e2a20ae142d4a452cab7a95e2553a2b9e29af54b\"" Sep 4 17:43:58.122606 kubelet[2532]: E0904 17:43:58.122581 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.127122 containerd[1439]: time="2024-09-04T17:43:58.127081028Z" level=info msg="CreateContainer within sandbox \"45c55587a1d8bda4ee1d7a23e2a20ae142d4a452cab7a95e2553a2b9e29af54b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:43:58.130633 containerd[1439]: time="2024-09-04T17:43:58.128504047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8964b,Uid:8af1ef57-ea69-4dee-9393-adb6f0a9b7a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\"" Sep 4 17:43:58.132511 kubelet[2532]: E0904 17:43:58.131260 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.145719 containerd[1439]: time="2024-09-04T17:43:58.145662530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:43:58.151232 containerd[1439]: time="2024-09-04T17:43:58.151098856Z" level=info msg="CreateContainer within sandbox \"45c55587a1d8bda4ee1d7a23e2a20ae142d4a452cab7a95e2553a2b9e29af54b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"262e72a47b6462c6991f9e437ca327cfefa53018df97e18c24cd46d63ccbd8c0\"" Sep 4 17:43:58.152133 containerd[1439]: time="2024-09-04T17:43:58.151900957Z" level=info msg="StartContainer for \"262e72a47b6462c6991f9e437ca327cfefa53018df97e18c24cd46d63ccbd8c0\"" Sep 4 17:43:58.178652 systemd[1]: Started cri-containerd-262e72a47b6462c6991f9e437ca327cfefa53018df97e18c24cd46d63ccbd8c0.scope - libcontainer container 262e72a47b6462c6991f9e437ca327cfefa53018df97e18c24cd46d63ccbd8c0. 
Sep 4 17:43:58.187468 kubelet[2532]: E0904 17:43:58.187431 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.188785 containerd[1439]: time="2024-09-04T17:43:58.188089639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jkdh9,Uid:44048fb6-256a-41d3-8bee-d1640f488f64,Namespace:kube-system,Attempt:0,}" Sep 4 17:43:58.204543 containerd[1439]: time="2024-09-04T17:43:58.204177867Z" level=info msg="StartContainer for \"262e72a47b6462c6991f9e437ca327cfefa53018df97e18c24cd46d63ccbd8c0\" returns successfully" Sep 4 17:43:58.219005 containerd[1439]: time="2024-09-04T17:43:58.212582407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:43:58.219005 containerd[1439]: time="2024-09-04T17:43:58.213270574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:43:58.219005 containerd[1439]: time="2024-09-04T17:43:58.213287016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.219005 containerd[1439]: time="2024-09-04T17:43:58.213391669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:43:58.242657 systemd[1]: Started cri-containerd-ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab.scope - libcontainer container ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab. Sep 4 17:43:58.277113 containerd[1439]: time="2024-09-04T17:43:58.277061816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jkdh9,Uid:44048fb6-256a-41d3-8bee-d1640f488f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\"" Sep 4 17:43:58.277983 kubelet[2532]: E0904 17:43:58.277957 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.608123 kubelet[2532]: E0904 17:43:58.608005 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:43:58.617717 kubelet[2532]: I0904 17:43:58.617010 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2kn5x" podStartSLOduration=1.6169933520000002 podStartE2EDuration="1.616993352s" podCreationTimestamp="2024-09-04 17:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:58.616930744 +0000 UTC m=+17.128764139" watchObservedRunningTime="2024-09-04 17:43:58.616993352 +0000 UTC m=+17.128826747" Sep 4 17:44:04.531607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093266543.mount: Deactivated successfully. 
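The pod_startup_latency_tracker entry just above reports a podStartSLOduration of about 1.617s for kube-proxy-2kn5x, roughly the gap between podCreationTimestamp and observedRunningTime. A quick recomputation from the two timestamps printed in the log (a rough cross-check only; the kubelet's exact accounting may differ by a fraction of a millisecond):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry for
        // kube-proxy-2kn5x, with the monotonic "m=+..." suffix dropped.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2024-09-04 17:43:57 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2024-09-04 17:43:58.616930744 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println("startup duration:", running.Sub(created)) // ~1.617s
    }
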
Sep 4 17:44:05.730567 containerd[1439]: time="2024-09-04T17:44:05.730516906Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:05.731004 containerd[1439]: time="2024-09-04T17:44:05.730975789Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651546" Sep 4 17:44:05.731952 containerd[1439]: time="2024-09-04T17:44:05.731906036Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:05.733576 containerd[1439]: time="2024-09-04T17:44:05.733470663Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.587746366s" Sep 4 17:44:05.733576 containerd[1439]: time="2024-09-04T17:44:05.733509187Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:44:05.737248 containerd[1439]: time="2024-09-04T17:44:05.737077402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:44:05.737795 containerd[1439]: time="2024-09-04T17:44:05.737763947Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:44:05.761424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768742794.mount: Deactivated successfully. Sep 4 17:44:05.762960 containerd[1439]: time="2024-09-04T17:44:05.762926510Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\"" Sep 4 17:44:05.763519 containerd[1439]: time="2024-09-04T17:44:05.763480482Z" level=info msg="StartContainer for \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\"" Sep 4 17:44:05.799611 systemd[1]: Started cri-containerd-031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812.scope - libcontainer container 031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812. Sep 4 17:44:05.878176 containerd[1439]: time="2024-09-04T17:44:05.878120452Z" level=info msg="StartContainer for \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\" returns successfully" Sep 4 17:44:05.886555 systemd[1]: cri-containerd-031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812.scope: Deactivated successfully. 
Sep 4 17:44:06.024267 containerd[1439]: time="2024-09-04T17:44:06.015104075Z" level=info msg="shim disconnected" id=031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812 namespace=k8s.io Sep 4 17:44:06.024267 containerd[1439]: time="2024-09-04T17:44:06.024200137Z" level=warning msg="cleaning up after shim disconnected" id=031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812 namespace=k8s.io Sep 4 17:44:06.024267 containerd[1439]: time="2024-09-04T17:44:06.024217059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:06.635878 kubelet[2532]: E0904 17:44:06.635838 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:06.638948 containerd[1439]: time="2024-09-04T17:44:06.638900880Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:44:06.650011 containerd[1439]: time="2024-09-04T17:44:06.649964081Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\"" Sep 4 17:44:06.650424 containerd[1439]: time="2024-09-04T17:44:06.650384199Z" level=info msg="StartContainer for \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\"" Sep 4 17:44:06.688552 systemd[1]: Started cri-containerd-939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a.scope - libcontainer container 939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a. Sep 4 17:44:06.711695 containerd[1439]: time="2024-09-04T17:44:06.711557690Z" level=info msg="StartContainer for \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\" returns successfully" Sep 4 17:44:06.726347 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:44:06.726808 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:44:06.726879 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:44:06.732672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:44:06.732835 systemd[1]: cri-containerd-939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a.scope: Deactivated successfully. Sep 4 17:44:06.760317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812-rootfs.mount: Deactivated successfully. Sep 4 17:44:06.772033 containerd[1439]: time="2024-09-04T17:44:06.771971913Z" level=info msg="shim disconnected" id=939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a namespace=k8s.io Sep 4 17:44:06.772227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851575452.mount: Deactivated successfully. Sep 4 17:44:06.773867 containerd[1439]: time="2024-09-04T17:44:06.772563847Z" level=warning msg="cleaning up after shim disconnected" id=939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a namespace=k8s.io Sep 4 17:44:06.773867 containerd[1439]: time="2024-09-04T17:44:06.773216986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:06.775543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:44:07.071759 containerd[1439]: time="2024-09-04T17:44:07.071697743Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:07.073003 containerd[1439]: time="2024-09-04T17:44:07.072967654Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138338" Sep 4 17:44:07.074059 containerd[1439]: time="2024-09-04T17:44:07.074021226Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:44:07.075408 containerd[1439]: time="2024-09-04T17:44:07.075102720Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.337916308s" Sep 4 17:44:07.075408 containerd[1439]: time="2024-09-04T17:44:07.075139803Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:44:07.077418 containerd[1439]: time="2024-09-04T17:44:07.077307792Z" level=info msg="CreateContainer within sandbox \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:44:07.088832 containerd[1439]: time="2024-09-04T17:44:07.088784992Z" level=info msg="CreateContainer within sandbox \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\"" Sep 4 17:44:07.090440 containerd[1439]: time="2024-09-04T17:44:07.089423448Z" level=info msg="StartContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\"" Sep 4 17:44:07.116582 systemd[1]: Started cri-containerd-97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9.scope - libcontainer container 97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9. 
Sep 4 17:44:07.138766 containerd[1439]: time="2024-09-04T17:44:07.138728023Z" level=info msg="StartContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" returns successfully" Sep 4 17:44:07.639857 kubelet[2532]: E0904 17:44:07.639808 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:07.646201 containerd[1439]: time="2024-09-04T17:44:07.645909332Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:44:07.660475 kubelet[2532]: E0904 17:44:07.660438 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:07.682313 containerd[1439]: time="2024-09-04T17:44:07.682250458Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\"" Sep 4 17:44:07.685707 containerd[1439]: time="2024-09-04T17:44:07.685664196Z" level=info msg="StartContainer for \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\"" Sep 4 17:44:07.723589 systemd[1]: Started cri-containerd-d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984.scope - libcontainer container d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984. Sep 4 17:44:07.753451 containerd[1439]: time="2024-09-04T17:44:07.753413618Z" level=info msg="StartContainer for \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\" returns successfully" Sep 4 17:44:07.772255 systemd[1]: cri-containerd-d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984.scope: Deactivated successfully. Sep 4 17:44:07.795481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984-rootfs.mount: Deactivated successfully. 
Sep 4 17:44:07.843833 containerd[1439]: time="2024-09-04T17:44:07.843653241Z" level=info msg="shim disconnected" id=d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984 namespace=k8s.io Sep 4 17:44:07.843833 containerd[1439]: time="2024-09-04T17:44:07.843725967Z" level=warning msg="cleaning up after shim disconnected" id=d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984 namespace=k8s.io Sep 4 17:44:07.843833 containerd[1439]: time="2024-09-04T17:44:07.843745209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:08.664291 kubelet[2532]: E0904 17:44:08.664133 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:08.670249 kubelet[2532]: E0904 17:44:08.664430 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:08.670287 containerd[1439]: time="2024-09-04T17:44:08.667021683Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:44:08.682393 kubelet[2532]: I0904 17:44:08.681155 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-jkdh9" podStartSLOduration=2.884120527 podStartE2EDuration="11.681137629s" podCreationTimestamp="2024-09-04 17:43:57 +0000 UTC" firstStartedPulling="2024-09-04 17:43:58.278941413 +0000 UTC m=+16.790774768" lastFinishedPulling="2024-09-04 17:44:07.075958475 +0000 UTC m=+25.587791870" observedRunningTime="2024-09-04 17:44:07.679662273 +0000 UTC m=+26.191495668" watchObservedRunningTime="2024-09-04 17:44:08.681137629 +0000 UTC m=+27.192971024" Sep 4 17:44:08.684204 containerd[1439]: time="2024-09-04T17:44:08.684158843Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\"" Sep 4 17:44:08.684667 containerd[1439]: time="2024-09-04T17:44:08.684646604Z" level=info msg="StartContainer for \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\"" Sep 4 17:44:08.712546 systemd[1]: Started cri-containerd-88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b.scope - libcontainer container 88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b. Sep 4 17:44:08.730032 systemd[1]: cri-containerd-88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b.scope: Deactivated successfully. 
Sep 4 17:44:08.731514 containerd[1439]: time="2024-09-04T17:44:08.731379611Z" level=info msg="StartContainer for \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\" returns successfully" Sep 4 17:44:08.764912 containerd[1439]: time="2024-09-04T17:44:08.764845423Z" level=info msg="shim disconnected" id=88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b namespace=k8s.io Sep 4 17:44:08.764912 containerd[1439]: time="2024-09-04T17:44:08.764897388Z" level=warning msg="cleaning up after shim disconnected" id=88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b namespace=k8s.io Sep 4 17:44:08.764912 containerd[1439]: time="2024-09-04T17:44:08.764909429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:09.675601 kubelet[2532]: E0904 17:44:09.675383 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:09.677408 containerd[1439]: time="2024-09-04T17:44:09.677346746Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:44:09.694357 containerd[1439]: time="2024-09-04T17:44:09.694258719Z" level=info msg="CreateContainer within sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\"" Sep 4 17:44:09.695455 containerd[1439]: time="2024-09-04T17:44:09.694650350Z" level=info msg="StartContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\"" Sep 4 17:44:09.722535 systemd[1]: Started cri-containerd-69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3.scope - libcontainer container 69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3. Sep 4 17:44:09.751218 containerd[1439]: time="2024-09-04T17:44:09.751084129Z" level=info msg="StartContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" returns successfully" Sep 4 17:44:09.822424 kubelet[2532]: I0904 17:44:09.818456 2532 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:44:09.841098 kubelet[2532]: I0904 17:44:09.841054 2532 topology_manager.go:215] "Topology Admit Handler" podUID="d9dd5217-2a6b-4352-8f67-51086e5c3d44" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fhmz4" Sep 4 17:44:09.841536 kubelet[2532]: I0904 17:44:09.841504 2532 topology_manager.go:215] "Topology Admit Handler" podUID="ebad9464-1642-44b3-a734-13ff3ec27066" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qc5v2" Sep 4 17:44:09.861734 systemd[1]: Created slice kubepods-burstable-podd9dd5217_2a6b_4352_8f67_51086e5c3d44.slice - libcontainer container kubepods-burstable-podd9dd5217_2a6b_4352_8f67_51086e5c3d44.slice. Sep 4 17:44:09.869067 systemd[1]: Created slice kubepods-burstable-podebad9464_1642_44b3_a734_13ff3ec27066.slice - libcontainer container kubepods-burstable-podebad9464_1642_44b3_a734_13ff3ec27066.slice. 
Sep 4 17:44:09.947427 kubelet[2532]: I0904 17:44:09.947306 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9dd5217-2a6b-4352-8f67-51086e5c3d44-config-volume\") pod \"coredns-7db6d8ff4d-fhmz4\" (UID: \"d9dd5217-2a6b-4352-8f67-51086e5c3d44\") " pod="kube-system/coredns-7db6d8ff4d-fhmz4" Sep 4 17:44:09.947427 kubelet[2532]: I0904 17:44:09.947349 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5ghz\" (UniqueName: \"kubernetes.io/projected/ebad9464-1642-44b3-a734-13ff3ec27066-kube-api-access-r5ghz\") pod \"coredns-7db6d8ff4d-qc5v2\" (UID: \"ebad9464-1642-44b3-a734-13ff3ec27066\") " pod="kube-system/coredns-7db6d8ff4d-qc5v2" Sep 4 17:44:09.947427 kubelet[2532]: I0904 17:44:09.947372 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvnfj\" (UniqueName: \"kubernetes.io/projected/d9dd5217-2a6b-4352-8f67-51086e5c3d44-kube-api-access-hvnfj\") pod \"coredns-7db6d8ff4d-fhmz4\" (UID: \"d9dd5217-2a6b-4352-8f67-51086e5c3d44\") " pod="kube-system/coredns-7db6d8ff4d-fhmz4" Sep 4 17:44:09.947427 kubelet[2532]: I0904 17:44:09.947388 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebad9464-1642-44b3-a734-13ff3ec27066-config-volume\") pod \"coredns-7db6d8ff4d-qc5v2\" (UID: \"ebad9464-1642-44b3-a734-13ff3ec27066\") " pod="kube-system/coredns-7db6d8ff4d-qc5v2" Sep 4 17:44:10.166219 kubelet[2532]: E0904 17:44:10.166182 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:10.167393 containerd[1439]: time="2024-09-04T17:44:10.167359334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fhmz4,Uid:d9dd5217-2a6b-4352-8f67-51086e5c3d44,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:10.172432 kubelet[2532]: E0904 17:44:10.172407 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:10.173149 containerd[1439]: time="2024-09-04T17:44:10.173110625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qc5v2,Uid:ebad9464-1642-44b3-a734-13ff3ec27066,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:10.680698 kubelet[2532]: E0904 17:44:10.680662 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:10.694102 kubelet[2532]: I0904 17:44:10.694043 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8964b" podStartSLOduration=6.0933892610000004 podStartE2EDuration="13.694027235s" podCreationTimestamp="2024-09-04 17:43:57 +0000 UTC" firstStartedPulling="2024-09-04 17:43:58.135838212 +0000 UTC m=+16.647671607" lastFinishedPulling="2024-09-04 17:44:05.736476186 +0000 UTC m=+24.248309581" observedRunningTime="2024-09-04 17:44:10.69332654 +0000 UTC m=+29.205159935" watchObservedRunningTime="2024-09-04 17:44:10.694027235 +0000 UTC m=+29.205860630" Sep 4 17:44:11.605385 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140). 
Sep 4 17:44:11.649431 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:11.650164 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:11.653643 systemd-logind[1415]: New session 8 of user core. Sep 4 17:44:11.660599 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:44:11.682072 kubelet[2532]: E0904 17:44:11.682040 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:11.798899 sshd[3384]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:11.805514 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:58140.service: Deactivated successfully. Sep 4 17:44:11.807261 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:44:11.807879 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:44:11.808886 systemd-logind[1415]: Removed session 8. Sep 4 17:44:11.942127 systemd-networkd[1371]: cilium_host: Link UP Sep 4 17:44:11.942241 systemd-networkd[1371]: cilium_net: Link UP Sep 4 17:44:11.942365 systemd-networkd[1371]: cilium_net: Gained carrier Sep 4 17:44:11.942504 systemd-networkd[1371]: cilium_host: Gained carrier Sep 4 17:44:12.042940 systemd-networkd[1371]: cilium_vxlan: Link UP Sep 4 17:44:12.042948 systemd-networkd[1371]: cilium_vxlan: Gained carrier Sep 4 17:44:12.397528 kernel: NET: Registered PF_ALG protocol family Sep 4 17:44:12.667773 systemd-networkd[1371]: cilium_net: Gained IPv6LL Sep 4 17:44:12.668041 systemd-networkd[1371]: cilium_host: Gained IPv6LL Sep 4 17:44:12.683943 kubelet[2532]: E0904 17:44:12.683904 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:12.956471 systemd-networkd[1371]: lxc_health: Link UP Sep 4 17:44:12.966171 systemd-networkd[1371]: lxc_health: Gained carrier Sep 4 17:44:13.292941 systemd-networkd[1371]: lxc406e3a204ee7: Link UP Sep 4 17:44:13.298743 systemd-networkd[1371]: lxc479ad6f17b99: Link UP Sep 4 17:44:13.306444 kernel: eth0: renamed from tmp09535 Sep 4 17:44:13.322841 kernel: eth0: renamed from tmpafcd8 Sep 4 17:44:13.331632 systemd-networkd[1371]: lxc406e3a204ee7: Gained carrier Sep 4 17:44:13.334839 systemd-networkd[1371]: lxc479ad6f17b99: Gained carrier Sep 4 17:44:13.627814 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL Sep 4 17:44:13.685606 kubelet[2532]: E0904 17:44:13.685528 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:14.459568 systemd-networkd[1371]: lxc_health: Gained IPv6LL Sep 4 17:44:14.692869 kubelet[2532]: E0904 17:44:14.692330 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:15.035563 systemd-networkd[1371]: lxc406e3a204ee7: Gained IPv6LL Sep 4 17:44:15.291558 systemd-networkd[1371]: lxc479ad6f17b99: Gained IPv6LL Sep 4 17:44:15.695041 kubelet[2532]: E0904 17:44:15.694985 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:16.823708 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:38596.service - OpenSSH per-connection server daemon (10.0.0.1:38596).
Sep 4 17:44:16.859025 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 38596 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:16.860526 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:16.866490 systemd-logind[1415]: New session 9 of user core. Sep 4 17:44:16.873571 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:44:16.956587 containerd[1439]: time="2024-09-04T17:44:16.955520731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:16.956587 containerd[1439]: time="2024-09-04T17:44:16.955579735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:16.956587 containerd[1439]: time="2024-09-04T17:44:16.955590776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:16.956587 containerd[1439]: time="2024-09-04T17:44:16.955667021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:16.970441 containerd[1439]: time="2024-09-04T17:44:16.970335219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:16.970561 containerd[1439]: time="2024-09-04T17:44:16.970434666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:16.970561 containerd[1439]: time="2024-09-04T17:44:16.970453147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:16.970915 containerd[1439]: time="2024-09-04T17:44:16.970839212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:16.978573 systemd[1]: Started cri-containerd-afcd893693302dcea5315404ebfde4be99d24d770d8cfb7068f679ddc8f55ff3.scope - libcontainer container afcd893693302dcea5315404ebfde4be99d24d770d8cfb7068f679ddc8f55ff3. Sep 4 17:44:16.989950 systemd[1]: Started cri-containerd-095357cf679559cbb1c0662162d2c04e76d17fd750e96a00b2c14b51514d2ac6.scope - libcontainer container 095357cf679559cbb1c0662162d2c04e76d17fd750e96a00b2c14b51514d2ac6.
Sep 4 17:44:16.997487 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:44:17.007914 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:44:17.023681 containerd[1439]: time="2024-09-04T17:44:17.023645862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fhmz4,Uid:d9dd5217-2a6b-4352-8f67-51086e5c3d44,Namespace:kube-system,Attempt:0,} returns sandbox id \"afcd893693302dcea5315404ebfde4be99d24d770d8cfb7068f679ddc8f55ff3\"" Sep 4 17:44:17.024424 kubelet[2532]: E0904 17:44:17.024383 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:17.029326 containerd[1439]: time="2024-09-04T17:44:17.029179774Z" level=info msg="CreateContainer within sandbox \"afcd893693302dcea5315404ebfde4be99d24d770d8cfb7068f679ddc8f55ff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:44:17.033062 sshd[3782]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:17.036612 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:38596.service: Deactivated successfully. Sep 4 17:44:17.037366 containerd[1439]: time="2024-09-04T17:44:17.037315532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qc5v2,Uid:ebad9464-1642-44b3-a734-13ff3ec27066,Namespace:kube-system,Attempt:0,} returns sandbox id \"095357cf679559cbb1c0662162d2c04e76d17fd750e96a00b2c14b51514d2ac6\"" Sep 4 17:44:17.038163 kubelet[2532]: E0904 17:44:17.038145 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:17.038939 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:44:17.040019 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:44:17.041464 systemd-logind[1415]: Removed session 9. 
Sep 4 17:44:17.042193 containerd[1439]: time="2024-09-04T17:44:17.042168400Z" level=info msg="CreateContainer within sandbox \"095357cf679559cbb1c0662162d2c04e76d17fd750e96a00b2c14b51514d2ac6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:44:17.054952 containerd[1439]: time="2024-09-04T17:44:17.054911531Z" level=info msg="CreateContainer within sandbox \"afcd893693302dcea5315404ebfde4be99d24d770d8cfb7068f679ddc8f55ff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a440d4ba261c7a74488dbbaa1469f430d3a687b381cdc5f09191362ada344df6\"" Sep 4 17:44:17.056366 containerd[1439]: time="2024-09-04T17:44:17.055555892Z" level=info msg="StartContainer for \"a440d4ba261c7a74488dbbaa1469f430d3a687b381cdc5f09191362ada344df6\"" Sep 4 17:44:17.059584 containerd[1439]: time="2024-09-04T17:44:17.059551866Z" level=info msg="CreateContainer within sandbox \"095357cf679559cbb1c0662162d2c04e76d17fd750e96a00b2c14b51514d2ac6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74e256fa52728ebf62a0d47c7634d5a0a1ad9ea711b2e970fca2b733740ebe64\"" Sep 4 17:44:17.060407 containerd[1439]: time="2024-09-04T17:44:17.060330475Z" level=info msg="StartContainer for \"74e256fa52728ebf62a0d47c7634d5a0a1ad9ea711b2e970fca2b733740ebe64\"" Sep 4 17:44:17.082565 systemd[1]: Started cri-containerd-a440d4ba261c7a74488dbbaa1469f430d3a687b381cdc5f09191362ada344df6.scope - libcontainer container a440d4ba261c7a74488dbbaa1469f430d3a687b381cdc5f09191362ada344df6. Sep 4 17:44:17.085682 systemd[1]: Started cri-containerd-74e256fa52728ebf62a0d47c7634d5a0a1ad9ea711b2e970fca2b733740ebe64.scope - libcontainer container 74e256fa52728ebf62a0d47c7634d5a0a1ad9ea711b2e970fca2b733740ebe64. Sep 4 17:44:17.129104 containerd[1439]: time="2024-09-04T17:44:17.129011924Z" level=info msg="StartContainer for \"a440d4ba261c7a74488dbbaa1469f430d3a687b381cdc5f09191362ada344df6\" returns successfully" Sep 4 17:44:17.129104 containerd[1439]: time="2024-09-04T17:44:17.129088849Z" level=info msg="StartContainer for \"74e256fa52728ebf62a0d47c7634d5a0a1ad9ea711b2e970fca2b733740ebe64\" returns successfully" Sep 4 17:44:17.701981 kubelet[2532]: E0904 17:44:17.701944 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:17.705975 kubelet[2532]: E0904 17:44:17.705947 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:17.717775 kubelet[2532]: I0904 17:44:17.717628 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhmz4" podStartSLOduration=20.717611724 podStartE2EDuration="20.717611724s" podCreationTimestamp="2024-09-04 17:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:17.712596445 +0000 UTC m=+36.224429840" watchObservedRunningTime="2024-09-04 17:44:17.717611724 +0000 UTC m=+36.229445119" Sep 4 17:44:17.774130 kubelet[2532]: I0904 17:44:17.774034 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qc5v2" podStartSLOduration=20.774017951 podStartE2EDuration="20.774017951s" podCreationTimestamp="2024-09-04 17:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:44:17.773823019 +0000 UTC m=+36.285656414" watchObservedRunningTime="2024-09-04 17:44:17.774017951 +0000 UTC m=+36.285851346"
Sep 4 17:44:18.707204 kubelet[2532]: E0904 17:44:18.707070 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:18.707204 kubelet[2532]: E0904 17:44:18.707141 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:19.708986 kubelet[2532]: E0904 17:44:19.708955 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:22.040940 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:38608.service - OpenSSH per-connection server daemon (10.0.0.1:38608). Sep 4 17:44:22.079546 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 38608 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:22.080865 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:22.084166 systemd-logind[1415]: New session 10 of user core. Sep 4 17:44:22.089533 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:44:22.198662 sshd[3966]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:22.201878 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:38608.service: Deactivated successfully. Sep 4 17:44:22.203584 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:44:22.204144 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:44:22.204835 systemd-logind[1415]: Removed session 10. Sep 4 17:44:27.214270 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:42808.service - OpenSSH per-connection server daemon (10.0.0.1:42808). Sep 4 17:44:27.252337 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 42808 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:27.254728 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:27.259697 systemd-logind[1415]: New session 11 of user core. Sep 4 17:44:27.272594 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:44:27.396415 sshd[3983]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:27.408017 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:42808.service: Deactivated successfully. Sep 4 17:44:27.409824 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:44:27.411817 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:44:27.432025 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:42824.service - OpenSSH per-connection server daemon (10.0.0.1:42824). Sep 4 17:44:27.433377 systemd-logind[1415]: Removed session 11. Sep 4 17:44:27.470311 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 42824 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:27.471519 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:27.475352 systemd-logind[1415]: New session 12 of user core. Sep 4 17:44:27.485579 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 17:44:27.636694 sshd[3999]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:27.648103 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:42824.service: Deactivated successfully. Sep 4 17:44:27.651628 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:44:27.654174 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:44:27.665848 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:42828.service - OpenSSH per-connection server daemon (10.0.0.1:42828). Sep 4 17:44:27.667235 systemd-logind[1415]: Removed session 12. Sep 4 17:44:27.701434 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 42828 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:27.702784 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:27.707289 systemd-logind[1415]: New session 13 of user core. Sep 4 17:44:27.720618 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:44:27.845246 sshd[4011]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:27.848839 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:42828.service: Deactivated successfully. Sep 4 17:44:27.850625 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:44:27.851243 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:44:27.852380 systemd-logind[1415]: Removed session 13. Sep 4 17:44:32.858879 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:45244.service - OpenSSH per-connection server daemon (10.0.0.1:45244). Sep 4 17:44:32.895933 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:32.897118 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:32.901120 systemd-logind[1415]: New session 14 of user core. Sep 4 17:44:32.914555 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:44:33.020068 sshd[4029]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:33.033867 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:45244.service: Deactivated successfully. Sep 4 17:44:33.035359 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:44:33.037741 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:44:33.051809 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:45258.service - OpenSSH per-connection server daemon (10.0.0.1:45258). Sep 4 17:44:33.053147 systemd-logind[1415]: Removed session 14. Sep 4 17:44:33.084162 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 45258 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:33.085441 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:33.089473 systemd-logind[1415]: New session 15 of user core. Sep 4 17:44:33.104558 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:44:33.330331 sshd[4044]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:33.342895 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:45258.service: Deactivated successfully. Sep 4 17:44:33.344544 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:44:33.345854 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:44:33.353864 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:45260.service - OpenSSH per-connection server daemon (10.0.0.1:45260). 
Sep 4 17:44:33.355110 systemd-logind[1415]: Removed session 15. Sep 4 17:44:33.387841 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 45260 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:33.389204 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:33.393141 systemd-logind[1415]: New session 16 of user core. Sep 4 17:44:33.404564 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:44:34.669798 sshd[4056]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:34.678242 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:45260.service: Deactivated successfully. Sep 4 17:44:34.683240 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:44:34.687172 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:44:34.701798 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270). Sep 4 17:44:34.702420 systemd-logind[1415]: Removed session 16. Sep 4 17:44:34.739513 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:34.741069 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:34.745562 systemd-logind[1415]: New session 17 of user core. Sep 4 17:44:34.756756 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:44:34.971075 sshd[4084]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:34.979194 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:45270.service: Deactivated successfully. Sep 4 17:44:34.981043 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:44:34.985450 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:44:34.997676 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:45286.service - OpenSSH per-connection server daemon (10.0.0.1:45286). Sep 4 17:44:34.998541 systemd-logind[1415]: Removed session 17. Sep 4 17:44:35.030849 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 45286 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:35.032240 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:35.035813 systemd-logind[1415]: New session 18 of user core. Sep 4 17:44:35.044589 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:44:35.150971 sshd[4097]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:35.154520 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:45286.service: Deactivated successfully. Sep 4 17:44:35.156142 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:44:35.156724 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:44:35.157468 systemd-logind[1415]: Removed session 18. Sep 4 17:44:40.172772 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:45290.service - OpenSSH per-connection server daemon (10.0.0.1:45290). Sep 4 17:44:40.210745 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 45290 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:40.212178 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:40.215639 systemd-logind[1415]: New session 19 of user core. Sep 4 17:44:40.221592 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 4 17:44:40.328475 sshd[4115]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:40.331601 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:45290.service: Deactivated successfully. Sep 4 17:44:40.333243 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:44:40.333931 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:44:40.334784 systemd-logind[1415]: Removed session 19. Sep 4 17:44:45.343304 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:52090.service - OpenSSH per-connection server daemon (10.0.0.1:52090). Sep 4 17:44:45.381788 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 52090 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:45.383288 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:45.387576 systemd-logind[1415]: New session 20 of user core. Sep 4 17:44:45.403666 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:44:45.518377 sshd[4131]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:45.522133 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:52090.service: Deactivated successfully. Sep 4 17:44:45.524916 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:44:45.525998 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:44:45.527257 systemd-logind[1415]: Removed session 20. Sep 4 17:44:50.529506 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:52100.service - OpenSSH per-connection server daemon (10.0.0.1:52100). Sep 4 17:44:50.568108 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 52100 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:50.569636 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:50.573827 systemd-logind[1415]: New session 21 of user core. Sep 4 17:44:50.595603 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:44:50.700372 sshd[4147]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:50.715099 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:52100.service: Deactivated successfully. Sep 4 17:44:50.716881 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:44:50.718416 systemd-logind[1415]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:44:50.719947 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114). Sep 4 17:44:50.720968 systemd-logind[1415]: Removed session 21. Sep 4 17:44:50.758524 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:50.759897 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:50.763998 systemd-logind[1415]: New session 22 of user core. Sep 4 17:44:50.773596 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:44:52.616832 containerd[1439]: time="2024-09-04T17:44:52.616779419Z" level=info msg="StopContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" with timeout 30 (s)" Sep 4 17:44:52.619105 containerd[1439]: time="2024-09-04T17:44:52.617241132Z" level=info msg="Stop container \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" with signal terminated" Sep 4 17:44:52.626147 systemd[1]: cri-containerd-97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9.scope: Deactivated successfully. 
Sep 4 17:44:52.645829 containerd[1439]: time="2024-09-04T17:44:52.645791999Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:44:52.651043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9-rootfs.mount: Deactivated successfully. Sep 4 17:44:52.656459 containerd[1439]: time="2024-09-04T17:44:52.656368431Z" level=info msg="StopContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" with timeout 2 (s)" Sep 4 17:44:52.656715 containerd[1439]: time="2024-09-04T17:44:52.656671626Z" level=info msg="Stop container \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" with signal terminated" Sep 4 17:44:52.663964 containerd[1439]: time="2024-09-04T17:44:52.663917551Z" level=info msg="shim disconnected" id=97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9 namespace=k8s.io Sep 4 17:44:52.663964 containerd[1439]: time="2024-09-04T17:44:52.663961190Z" level=warning msg="cleaning up after shim disconnected" id=97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9 namespace=k8s.io Sep 4 17:44:52.663964 containerd[1439]: time="2024-09-04T17:44:52.663969310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:52.667874 systemd-networkd[1371]: lxc_health: Link DOWN Sep 4 17:44:52.667880 systemd-networkd[1371]: lxc_health: Lost carrier Sep 4 17:44:52.691897 systemd[1]: cri-containerd-69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3.scope: Deactivated successfully. Sep 4 17:44:52.692511 systemd[1]: cri-containerd-69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3.scope: Consumed 6.623s CPU time. Sep 4 17:44:52.705431 containerd[1439]: time="2024-09-04T17:44:52.704991539Z" level=info msg="StopContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" returns successfully" Sep 4 17:44:52.705637 containerd[1439]: time="2024-09-04T17:44:52.705605569Z" level=info msg="StopPodSandbox for \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\"" Sep 4 17:44:52.705683 containerd[1439]: time="2024-09-04T17:44:52.705643888Z" level=info msg="Container to stop \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.707452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab-shm.mount: Deactivated successfully. Sep 4 17:44:52.711694 systemd[1]: cri-containerd-ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab.scope: Deactivated successfully. Sep 4 17:44:52.732875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3-rootfs.mount: Deactivated successfully. Sep 4 17:44:52.739360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab-rootfs.mount: Deactivated successfully. 
Sep 4 17:44:52.742340 containerd[1439]: time="2024-09-04T17:44:52.742078390Z" level=info msg="shim disconnected" id=69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3 namespace=k8s.io Sep 4 17:44:52.742340 containerd[1439]: time="2024-09-04T17:44:52.742133989Z" level=warning msg="cleaning up after shim disconnected" id=69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3 namespace=k8s.io Sep 4 17:44:52.742340 containerd[1439]: time="2024-09-04T17:44:52.742153069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:52.743054 containerd[1439]: time="2024-09-04T17:44:52.742864817Z" level=info msg="shim disconnected" id=ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab namespace=k8s.io Sep 4 17:44:52.743054 containerd[1439]: time="2024-09-04T17:44:52.742918937Z" level=warning msg="cleaning up after shim disconnected" id=ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab namespace=k8s.io Sep 4 17:44:52.743054 containerd[1439]: time="2024-09-04T17:44:52.742926897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:52.755566 containerd[1439]: time="2024-09-04T17:44:52.755379739Z" level=info msg="TearDown network for sandbox \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\" successfully" Sep 4 17:44:52.755566 containerd[1439]: time="2024-09-04T17:44:52.755447698Z" level=info msg="StopPodSandbox for \"ae7312fa0979e0045682dd57b48297d86af647ef8a99efcafd6a6e58aec2caab\" returns successfully" Sep 4 17:44:52.757086 containerd[1439]: time="2024-09-04T17:44:52.757047112Z" level=info msg="StopContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" returns successfully" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757459066Z" level=info msg="StopPodSandbox for \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\"" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757497985Z" level=info msg="Container to stop \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757510585Z" level=info msg="Container to stop \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757520065Z" level=info msg="Container to stop \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757528825Z" level=info msg="Container to stop \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.757617 containerd[1439]: time="2024-09-04T17:44:52.757537585Z" level=info msg="Container to stop \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:44:52.766039 systemd[1]: cri-containerd-bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800.scope: Deactivated successfully. 
Sep 4 17:44:52.788086 kubelet[2532]: I0904 17:44:52.786983 2532 scope.go:117] "RemoveContainer" containerID="97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9" Sep 4 17:44:52.789623 containerd[1439]: time="2024-09-04T17:44:52.789587356Z" level=info msg="RemoveContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\"" Sep 4 17:44:52.794051 containerd[1439]: time="2024-09-04T17:44:52.794023125Z" level=info msg="RemoveContainer for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" returns successfully" Sep 4 17:44:52.794277 kubelet[2532]: I0904 17:44:52.794252 2532 scope.go:117] "RemoveContainer" containerID="97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9" Sep 4 17:44:52.794510 containerd[1439]: time="2024-09-04T17:44:52.794476798Z" level=error msg="ContainerStatus for \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\": not found" Sep 4 17:44:52.802720 containerd[1439]: time="2024-09-04T17:44:52.802666028Z" level=info msg="shim disconnected" id=bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800 namespace=k8s.io Sep 4 17:44:52.802720 containerd[1439]: time="2024-09-04T17:44:52.802715507Z" level=warning msg="cleaning up after shim disconnected" id=bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800 namespace=k8s.io Sep 4 17:44:52.802720 containerd[1439]: time="2024-09-04T17:44:52.802724147Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:52.804715 kubelet[2532]: E0904 17:44:52.804673 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\": not found" containerID="97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9" Sep 4 17:44:52.804822 kubelet[2532]: I0904 17:44:52.804707 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9"} err="failed to get container status \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\": rpc error: code = NotFound desc = an error occurred when try to find container \"97c4266456a238c54313e24698ee4c6f8cfdb58eff240071751017cff53d5dc9\": not found" Sep 4 17:44:52.813054 containerd[1439]: time="2024-09-04T17:44:52.813011464Z" level=info msg="TearDown network for sandbox \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" successfully" Sep 4 17:44:52.813054 containerd[1439]: time="2024-09-04T17:44:52.813043103Z" level=info msg="StopPodSandbox for \"bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800\" returns successfully" Sep 4 17:44:52.924730 kubelet[2532]: I0904 17:44:52.924693 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-xtables-lock\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.924730 kubelet[2532]: I0904 17:44:52.924731 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-cgroup\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") "
Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924753 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cni-path\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924773 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-clustermesh-secrets\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924787 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-net\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924801 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-kernel\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924826 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048fb6-256a-41d3-8bee-d1640f488f64-cilium-config-path\") pod \"44048fb6-256a-41d3-8bee-d1640f488f64\" (UID: \"44048fb6-256a-41d3-8bee-d1640f488f64\") " Sep 4 17:44:52.924866 kubelet[2532]: I0904 17:44:52.924845 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hubble-tls\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924858 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-bpf-maps\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924875 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-config-path\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924892 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9z58\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-kube-api-access-d9z58\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924910 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lkkb\" (UniqueName: \"kubernetes.io/projected/44048fb6-256a-41d3-8bee-d1640f488f64-kube-api-access-8lkkb\") pod \"44048fb6-256a-41d3-8bee-d1640f488f64\" (UID: \"44048fb6-256a-41d3-8bee-d1640f488f64\") "
Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924928 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hostproc\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925007 kubelet[2532]: I0904 17:44:52.924942 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-run\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925143 kubelet[2532]: I0904 17:44:52.924959 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-lib-modules\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.925143 kubelet[2532]: I0904 17:44:52.924973 2532 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-etc-cni-netd\") pod \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\" (UID: \"8af1ef57-ea69-4dee-9393-adb6f0a9b7a0\") " Sep 4 17:44:52.929172 kubelet[2532]: I0904 17:44:52.929137 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.929222 kubelet[2532]: I0904 17:44:52.929195 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.929524 kubelet[2532]: I0904 17:44:52.929495 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.930787 kubelet[2532]: I0904 17:44:52.930285 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.931924 kubelet[2532]: I0904 17:44:52.931885 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0").
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:44:52.931992 kubelet[2532]: I0904 17:44:52.931929 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.931992 kubelet[2532]: I0904 17:44:52.931949 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.932547 kubelet[2532]: I0904 17:44:52.932516 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44048fb6-256a-41d3-8bee-d1640f488f64-kube-api-access-8lkkb" (OuterVolumeSpecName: "kube-api-access-8lkkb") pod "44048fb6-256a-41d3-8bee-d1640f488f64" (UID: "44048fb6-256a-41d3-8bee-d1640f488f64"). InnerVolumeSpecName "kube-api-access-8lkkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:44:52.933291 kubelet[2532]: I0904 17:44:52.933251 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:44:52.933903 kubelet[2532]: I0904 17:44:52.933869 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44048fb6-256a-41d3-8bee-d1640f488f64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44048fb6-256a-41d3-8bee-d1640f488f64" (UID: "44048fb6-256a-41d3-8bee-d1640f488f64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:44:52.933968 kubelet[2532]: I0904 17:44:52.933961 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.933995 kubelet[2532]: I0904 17:44:52.933981 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.934024 kubelet[2532]: I0904 17:44:52.933998 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.934740 kubelet[2532]: I0904 17:44:52.934715 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-kube-api-access-d9z58" (OuterVolumeSpecName: "kube-api-access-d9z58") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "kube-api-access-d9z58". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:44:52.934925 kubelet[2532]: I0904 17:44:52.934905 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:44:52.935298 kubelet[2532]: I0904 17:44:52.935260 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" (UID: "8af1ef57-ea69-4dee-9393-adb6f0a9b7a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:44:53.025559 kubelet[2532]: I0904 17:44:53.025499 2532 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025559 kubelet[2532]: I0904 17:44:53.025544 2532 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025559 kubelet[2532]: I0904 17:44:53.025560 2532 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025574 2532 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025591 2532 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025604 2532 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025619 2532 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025634 2532 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 
kubelet[2532]: I0904 17:44:53.025663 2532 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025677 2532 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025737 kubelet[2532]: I0904 17:44:53.025691 2532 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025998 kubelet[2532]: I0904 17:44:53.025705 2532 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44048fb6-256a-41d3-8bee-d1640f488f64-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025998 kubelet[2532]: I0904 17:44:53.025713 2532 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025998 kubelet[2532]: I0904 17:44:53.025721 2532 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025998 kubelet[2532]: I0904 17:44:53.025728 2532 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d9z58\" (UniqueName: \"kubernetes.io/projected/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0-kube-api-access-d9z58\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.025998 kubelet[2532]: I0904 17:44:53.025738 2532 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8lkkb\" (UniqueName: \"kubernetes.io/projected/44048fb6-256a-41d3-8bee-d1640f488f64-kube-api-access-8lkkb\") on node \"localhost\" DevicePath \"\"" Sep 4 17:44:53.090896 systemd[1]: Removed slice kubepods-besteffort-pod44048fb6_256a_41d3_8bee_d1640f488f64.slice - libcontainer container kubepods-besteffort-pod44048fb6_256a_41d3_8bee_d1640f488f64.slice. Sep 4 17:44:53.571696 kubelet[2532]: I0904 17:44:53.571646 2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44048fb6-256a-41d3-8bee-d1640f488f64" path="/var/lib/kubelet/pods/44048fb6-256a-41d3-8bee-d1640f488f64/volumes" Sep 4 17:44:53.575903 systemd[1]: Removed slice kubepods-burstable-pod8af1ef57_ea69_4dee_9393_adb6f0a9b7a0.slice - libcontainer container kubepods-burstable-pod8af1ef57_ea69_4dee_9393_adb6f0a9b7a0.slice. Sep 4 17:44:53.575985 systemd[1]: kubepods-burstable-pod8af1ef57_ea69_4dee_9393_adb6f0a9b7a0.slice: Consumed 6.764s CPU time. Sep 4 17:44:53.631942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800-rootfs.mount: Deactivated successfully. Sep 4 17:44:53.632051 systemd[1]: var-lib-kubelet-pods-44048fb6\x2d256a\x2d41d3\x2d8bee\x2dd1640f488f64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lkkb.mount: Deactivated successfully. Sep 4 17:44:53.632125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb5fa8aa42ad0b670b338e453843c4e3fae70f9d5f2b67b8d109c36df907f800-shm.mount: Deactivated successfully. 
Sep 4 17:44:53.632189 systemd[1]: var-lib-kubelet-pods-8af1ef57\x2dea69\x2d4dee\x2d9393\x2dadb6f0a9b7a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9z58.mount: Deactivated successfully. Sep 4 17:44:53.632252 systemd[1]: var-lib-kubelet-pods-8af1ef57\x2dea69\x2d4dee\x2d9393\x2dadb6f0a9b7a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:44:53.632303 systemd[1]: var-lib-kubelet-pods-8af1ef57\x2dea69\x2d4dee\x2d9393\x2dadb6f0a9b7a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:44:53.791797 kubelet[2532]: I0904 17:44:53.791620 2532 scope.go:117] "RemoveContainer" containerID="69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3" Sep 4 17:44:53.793238 containerd[1439]: time="2024-09-04T17:44:53.792889882Z" level=info msg="RemoveContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\"" Sep 4 17:44:53.796075 containerd[1439]: time="2024-09-04T17:44:53.796045636Z" level=info msg="RemoveContainer for \"69a987015356cb74d54adc45d9a722fd0e7f3a6acf88dceb78561544ab704ff3\" returns successfully" Sep 4 17:44:53.796296 kubelet[2532]: I0904 17:44:53.796233 2532 scope.go:117] "RemoveContainer" containerID="88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b" Sep 4 17:44:53.797727 containerd[1439]: time="2024-09-04T17:44:53.797699572Z" level=info msg="RemoveContainer for \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\"" Sep 4 17:44:53.800300 containerd[1439]: time="2024-09-04T17:44:53.800129777Z" level=info msg="RemoveContainer for \"88202678a4ae82683f1d381c69304a73db4cc3333adfe40eaa275ea464a6870b\" returns successfully" Sep 4 17:44:53.800992 kubelet[2532]: I0904 17:44:53.800574 2532 scope.go:117] "RemoveContainer" containerID="d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984" Sep 4 17:44:53.802110 containerd[1439]: time="2024-09-04T17:44:53.801924431Z" level=info msg="RemoveContainer for \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\"" Sep 4 17:44:53.804747 containerd[1439]: time="2024-09-04T17:44:53.804711231Z" level=info msg="RemoveContainer for \"d4d265d571c6673841633d165935240fa292fc4980e33d177a5a11fafd184984\" returns successfully" Sep 4 17:44:53.805151 kubelet[2532]: I0904 17:44:53.805127 2532 scope.go:117] "RemoveContainer" containerID="939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a" Sep 4 17:44:53.806525 containerd[1439]: time="2024-09-04T17:44:53.806492045Z" level=info msg="RemoveContainer for \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\"" Sep 4 17:44:53.808744 containerd[1439]: time="2024-09-04T17:44:53.808712013Z" level=info msg="RemoveContainer for \"939c22a39fe960167b53ce0b2d4c375683f939d244d4eeb9e7b3c2c294e29d2a\" returns successfully" Sep 4 17:44:53.808925 kubelet[2532]: I0904 17:44:53.808879 2532 scope.go:117] "RemoveContainer" containerID="031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812" Sep 4 17:44:53.809908 containerd[1439]: time="2024-09-04T17:44:53.809881236Z" level=info msg="RemoveContainer for \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\"" Sep 4 17:44:53.812015 containerd[1439]: time="2024-09-04T17:44:53.811980966Z" level=info msg="RemoveContainer for \"031c8288f62d078cf48780b51aae2425f1b192c8a06feddc22d42f84c1397812\" returns successfully" Sep 4 17:44:54.580429 sshd[4162]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:54.588760 systemd[1]: 
sshd@21-10.0.0.142:22-10.0.0.1:52114.service: Deactivated successfully. Sep 4 17:44:54.590164 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:44:54.590332 systemd[1]: session-22.scope: Consumed 1.181s CPU time. Sep 4 17:44:54.591541 systemd-logind[1415]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:44:54.601653 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:51394.service - OpenSSH per-connection server daemon (10.0.0.1:51394). Sep 4 17:44:54.602455 systemd-logind[1415]: Removed session 22. Sep 4 17:44:54.634438 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 51394 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:54.635815 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:54.639470 systemd-logind[1415]: New session 23 of user core. Sep 4 17:44:54.645549 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:44:55.454452 sshd[4324]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:55.466578 kubelet[2532]: I0904 17:44:55.466527 2532 topology_manager.go:215] "Topology Admit Handler" podUID="e855850d-acf8-443c-8ad1-80c7a20b9736" podNamespace="kube-system" podName="cilium-72vwk" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466716 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="mount-cgroup" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466727 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="apply-sysctl-overwrites" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466734 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44048fb6-256a-41d3-8bee-d1640f488f64" containerName="cilium-operator" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466740 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="clean-cilium-state" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466747 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="cilium-agent" Sep 4 17:44:55.466882 kubelet[2532]: E0904 17:44:55.466753 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="mount-bpf-fs" Sep 4 17:44:55.466882 kubelet[2532]: I0904 17:44:55.466773 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="44048fb6-256a-41d3-8bee-d1640f488f64" containerName="cilium-operator" Sep 4 17:44:55.466882 kubelet[2532]: I0904 17:44:55.466779 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" containerName="cilium-agent" Sep 4 17:44:55.467805 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:51394.service: Deactivated successfully. Sep 4 17:44:55.471164 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:44:55.479971 systemd-logind[1415]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:44:55.489788 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:51410.service - OpenSSH per-connection server daemon (10.0.0.1:51410). Sep 4 17:44:55.495343 systemd-logind[1415]: Removed session 23. Sep 4 17:44:55.500853 systemd[1]: Created slice kubepods-burstable-pode855850d_acf8_443c_8ad1_80c7a20b9736.slice - libcontainer container kubepods-burstable-pode855850d_acf8_443c_8ad1_80c7a20b9736.slice. 
Sep 4 17:44:55.534412 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 51410 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:55.535007 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:44:55.538714 systemd-logind[1415]: New session 24 of user core. Sep 4 17:44:55.548598 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:44:55.572483 kubelet[2532]: I0904 17:44:55.571978 2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af1ef57-ea69-4dee-9393-adb6f0a9b7a0" path="/var/lib/kubelet/pods/8af1ef57-ea69-4dee-9393-adb6f0a9b7a0/volumes" Sep 4 17:44:55.599330 sshd[4337]: pam_unix(sshd:session): session closed for user core Sep 4 17:44:55.609827 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:51410.service: Deactivated successfully. Sep 4 17:44:55.611948 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:44:55.613221 systemd-logind[1415]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:44:55.623774 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:51414.service - OpenSSH per-connection server daemon (10.0.0.1:51414). Sep 4 17:44:55.624701 systemd-logind[1415]: Removed session 24. Sep 4 17:44:55.639794 kubelet[2532]: I0904 17:44:55.639749 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-bpf-maps\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.639794 kubelet[2532]: I0904 17:44:55.639792 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e855850d-acf8-443c-8ad1-80c7a20b9736-clustermesh-secrets\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.639909 kubelet[2532]: I0904 17:44:55.639816 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e855850d-acf8-443c-8ad1-80c7a20b9736-cilium-config-path\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.639909 kubelet[2532]: I0904 17:44:55.639840 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-hostproc\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.639909 kubelet[2532]: I0904 17:44:55.639857 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-cni-path\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.639909 kubelet[2532]: I0904 17:44:55.639895 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-host-proc-sys-net\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640003 kubelet[2532]: I0904 17:44:55.639928 2532 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e855850d-acf8-443c-8ad1-80c7a20b9736-hubble-tls\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640003 kubelet[2532]: I0904 17:44:55.639948 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-lib-modules\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640003 kubelet[2532]: I0904 17:44:55.639966 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-xtables-lock\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640003 kubelet[2532]: I0904 17:44:55.639981 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29dt\" (UniqueName: \"kubernetes.io/projected/e855850d-acf8-443c-8ad1-80c7a20b9736-kube-api-access-p29dt\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640003 kubelet[2532]: I0904 17:44:55.640000 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-etc-cni-netd\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640119 kubelet[2532]: I0904 17:44:55.640017 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e855850d-acf8-443c-8ad1-80c7a20b9736-cilium-ipsec-secrets\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640119 kubelet[2532]: I0904 17:44:55.640034 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-host-proc-sys-kernel\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640119 kubelet[2532]: I0904 17:44:55.640049 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-cilium-run\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.640119 kubelet[2532]: I0904 17:44:55.640063 2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e855850d-acf8-443c-8ad1-80c7a20b9736-cilium-cgroup\") pod \"cilium-72vwk\" (UID: \"e855850d-acf8-443c-8ad1-80c7a20b9736\") " pod="kube-system/cilium-72vwk" Sep 4 17:44:55.656542 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 51414 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:44:55.657799 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Sep 4 17:44:55.662103 systemd-logind[1415]: New session 25 of user core. Sep 4 17:44:55.675812 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:44:55.807364 kubelet[2532]: E0904 17:44:55.807244 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:55.807800 containerd[1439]: time="2024-09-04T17:44:55.807768101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72vwk,Uid:e855850d-acf8-443c-8ad1-80c7a20b9736,Namespace:kube-system,Attempt:0,}" Sep 4 17:44:55.825239 containerd[1439]: time="2024-09-04T17:44:55.824993699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:44:55.825239 containerd[1439]: time="2024-09-04T17:44:55.825046818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:44:55.825239 containerd[1439]: time="2024-09-04T17:44:55.825117177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:55.826242 containerd[1439]: time="2024-09-04T17:44:55.825248656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:44:55.841682 systemd[1]: Started cri-containerd-4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488.scope - libcontainer container 4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488. Sep 4 17:44:55.872330 containerd[1439]: time="2024-09-04T17:44:55.872286345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72vwk,Uid:e855850d-acf8-443c-8ad1-80c7a20b9736,Namespace:kube-system,Attempt:0,} returns sandbox id \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\"" Sep 4 17:44:55.872985 kubelet[2532]: E0904 17:44:55.872965 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:55.874633 containerd[1439]: time="2024-09-04T17:44:55.874600198Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:44:55.883589 containerd[1439]: time="2024-09-04T17:44:55.883548133Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a\"" Sep 4 17:44:55.883919 containerd[1439]: time="2024-09-04T17:44:55.883895249Z" level=info msg="StartContainer for \"f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a\"" Sep 4 17:44:55.910575 systemd[1]: Started cri-containerd-f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a.scope - libcontainer container f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a. 
Sep 4 17:44:55.930469 containerd[1439]: time="2024-09-04T17:44:55.930394265Z" level=info msg="StartContainer for \"f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a\" returns successfully" Sep 4 17:44:55.941379 systemd[1]: cri-containerd-f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a.scope: Deactivated successfully. Sep 4 17:44:55.966285 containerd[1439]: time="2024-09-04T17:44:55.966225166Z" level=info msg="shim disconnected" id=f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a namespace=k8s.io Sep 4 17:44:55.966285 containerd[1439]: time="2024-09-04T17:44:55.966278205Z" level=warning msg="cleaning up after shim disconnected" id=f094df7dafef5272a51ce92bae317728a97f619ddcafe8d98a1138cfb5a9a46a namespace=k8s.io Sep 4 17:44:55.966285 containerd[1439]: time="2024-09-04T17:44:55.966286205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:56.618507 kubelet[2532]: E0904 17:44:56.618457 2532 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:44:56.800284 kubelet[2532]: E0904 17:44:56.800246 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:56.803075 containerd[1439]: time="2024-09-04T17:44:56.802899016Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:44:56.813841 containerd[1439]: time="2024-09-04T17:44:56.813795103Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7\"" Sep 4 17:44:56.815387 containerd[1439]: time="2024-09-04T17:44:56.814288418Z" level=info msg="StartContainer for \"942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7\"" Sep 4 17:44:56.842560 systemd[1]: Started cri-containerd-942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7.scope - libcontainer container 942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7. Sep 4 17:44:56.861728 containerd[1439]: time="2024-09-04T17:44:56.861637325Z" level=info msg="StartContainer for \"942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7\" returns successfully" Sep 4 17:44:56.876415 systemd[1]: cri-containerd-942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7.scope: Deactivated successfully. Sep 4 17:44:56.891832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7-rootfs.mount: Deactivated successfully. 
Sep 4 17:44:56.894582 containerd[1439]: time="2024-09-04T17:44:56.894531383Z" level=info msg="shim disconnected" id=942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7 namespace=k8s.io Sep 4 17:44:56.894836 containerd[1439]: time="2024-09-04T17:44:56.894692381Z" level=warning msg="cleaning up after shim disconnected" id=942be9b1e1aa40979f61b913642dd69129e72c19007cbd03f3b1bcc6d2a8ced7 namespace=k8s.io Sep 4 17:44:56.894836 containerd[1439]: time="2024-09-04T17:44:56.894707101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:57.807159 kubelet[2532]: E0904 17:44:57.806908 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:57.811484 containerd[1439]: time="2024-09-04T17:44:57.810472557Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:44:57.826419 containerd[1439]: time="2024-09-04T17:44:57.826343332Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127\"" Sep 4 17:44:57.826907 containerd[1439]: time="2024-09-04T17:44:57.826874687Z" level=info msg="StartContainer for \"379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127\"" Sep 4 17:44:57.874662 systemd[1]: Started cri-containerd-379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127.scope - libcontainer container 379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127. Sep 4 17:44:57.899085 containerd[1439]: time="2024-09-04T17:44:57.898940188Z" level=info msg="StartContainer for \"379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127\" returns successfully" Sep 4 17:44:57.900419 systemd[1]: cri-containerd-379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127.scope: Deactivated successfully. Sep 4 17:44:57.922011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127-rootfs.mount: Deactivated successfully. Sep 4 17:44:57.928247 containerd[1439]: time="2024-09-04T17:44:57.928159961Z" level=info msg="shim disconnected" id=379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127 namespace=k8s.io Sep 4 17:44:57.928247 containerd[1439]: time="2024-09-04T17:44:57.928235321Z" level=warning msg="cleaning up after shim disconnected" id=379201cf676a33805c17660dd28e1be06cc5fd9d548500784da33e923cf75127 namespace=k8s.io Sep 4 17:44:57.928247 containerd[1439]: time="2024-09-04T17:44:57.928244121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:58.812724 kubelet[2532]: E0904 17:44:58.812688 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:58.816983 containerd[1439]: time="2024-09-04T17:44:58.816937357Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:44:58.828498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442886216.mount: Deactivated successfully. 
Sep 4 17:44:58.830557 containerd[1439]: time="2024-09-04T17:44:58.830354291Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19\"" Sep 4 17:44:58.832114 containerd[1439]: time="2024-09-04T17:44:58.831989798Z" level=info msg="StartContainer for \"32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19\"" Sep 4 17:44:58.855087 systemd[1]: run-containerd-runc-k8s.io-32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19-runc.6tMFMD.mount: Deactivated successfully. Sep 4 17:44:58.866626 systemd[1]: Started cri-containerd-32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19.scope - libcontainer container 32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19. Sep 4 17:44:58.888455 systemd[1]: cri-containerd-32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19.scope: Deactivated successfully. Sep 4 17:44:58.891995 containerd[1439]: time="2024-09-04T17:44:58.891937924Z" level=info msg="StartContainer for \"32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19\" returns successfully" Sep 4 17:44:58.915445 containerd[1439]: time="2024-09-04T17:44:58.915331059Z" level=info msg="shim disconnected" id=32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19 namespace=k8s.io Sep 4 17:44:58.915717 containerd[1439]: time="2024-09-04T17:44:58.915504217Z" level=warning msg="cleaning up after shim disconnected" id=32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19 namespace=k8s.io Sep 4 17:44:58.915717 containerd[1439]: time="2024-09-04T17:44:58.915518057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:44:59.816826 kubelet[2532]: E0904 17:44:59.816368 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:44:59.819562 containerd[1439]: time="2024-09-04T17:44:59.819518514Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:44:59.826759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32bc63c307f7b3f92527a50153c012914b0c91b824dc29b61ac3d16f3a605e19-rootfs.mount: Deactivated successfully. Sep 4 17:44:59.844498 containerd[1439]: time="2024-09-04T17:44:59.844374347Z" level=info msg="CreateContainer within sandbox \"4114050580b89b168416a70879b4bca8fe680969d2f500684e9df52d6ccea488\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a97ce152058a7c72699b0bdac022a9cd426bac85616dd4a19d9cb9ca8c2a86e\"" Sep 4 17:44:59.846478 containerd[1439]: time="2024-09-04T17:44:59.846288094Z" level=info msg="StartContainer for \"1a97ce152058a7c72699b0bdac022a9cd426bac85616dd4a19d9cb9ca8c2a86e\"" Sep 4 17:44:59.885607 systemd[1]: Started cri-containerd-1a97ce152058a7c72699b0bdac022a9cd426bac85616dd4a19d9cb9ca8c2a86e.scope - libcontainer container 1a97ce152058a7c72699b0bdac022a9cd426bac85616dd4a19d9cb9ca8c2a86e. 
Sep 4 17:44:59.911961 containerd[1439]: time="2024-09-04T17:44:59.911905053Z" level=info msg="StartContainer for \"1a97ce152058a7c72699b0bdac022a9cd426bac85616dd4a19d9cb9ca8c2a86e\" returns successfully" Sep 4 17:45:00.192535 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 17:45:00.826898 kubelet[2532]: E0904 17:45:00.826859 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:00.841837 kubelet[2532]: I0904 17:45:00.841764 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-72vwk" podStartSLOduration=5.841747763 podStartE2EDuration="5.841747763s" podCreationTimestamp="2024-09-04 17:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:45:00.841483325 +0000 UTC m=+79.353316720" watchObservedRunningTime="2024-09-04 17:45:00.841747763 +0000 UTC m=+79.353581118" Sep 4 17:45:01.828814 kubelet[2532]: E0904 17:45:01.828758 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:03.168123 systemd-networkd[1371]: lxc_health: Link UP Sep 4 17:45:03.176585 systemd-networkd[1371]: lxc_health: Gained carrier Sep 4 17:45:03.809146 kubelet[2532]: E0904 17:45:03.809090 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:03.832660 kubelet[2532]: E0904 17:45:03.832548 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:04.570210 kubelet[2532]: E0904 17:45:04.570168 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:04.834753 kubelet[2532]: E0904 17:45:04.834366 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:45:05.211545 systemd-networkd[1371]: lxc_health: Gained IPv6LL Sep 4 17:45:10.611168 sshd[4345]: pam_unix(sshd:session): session closed for user core Sep 4 17:45:10.616757 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:51414.service: Deactivated successfully. Sep 4 17:45:10.620195 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:45:10.621256 systemd-logind[1415]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:45:10.622183 systemd-logind[1415]: Removed session 25. Sep 4 17:45:11.570071 kubelet[2532]: E0904 17:45:11.570036 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"