Sep 4 17:21:51.911838 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:21:51.911858 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024 Sep 4 17:21:51.911868 kernel: KASLR enabled Sep 4 17:21:51.911874 kernel: efi: EFI v2.7 by EDK II Sep 4 17:21:51.911879 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:21:51.911885 kernel: random: crng init done Sep 4 17:21:51.911892 kernel: ACPI: Early table checksum verification disabled Sep 4 17:21:51.911898 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:21:51.911904 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:21:51.911911 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911917 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911923 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911929 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911935 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911942 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911950 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911956 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911963 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:21:51.911969 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:21:51.911975 kernel: NUMA: Failed to initialise from firmware Sep 4 17:21:51.911981 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:21:51.911987 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 4 17:21:51.911994 kernel: Zone ranges: Sep 4 17:21:51.912000 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:21:51.912006 kernel: DMA32 empty Sep 4 17:21:51.912013 kernel: Normal empty Sep 4 17:21:51.912019 kernel: Movable zone start for each node Sep 4 17:21:51.912026 kernel: Early memory node ranges Sep 4 17:21:51.912032 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:21:51.912038 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:21:51.912044 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:21:51.912051 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:21:51.912057 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:21:51.912063 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:21:51.912069 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:21:51.912076 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:21:51.912082 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:21:51.912089 kernel: psci: probing for conduit method from ACPI. Sep 4 17:21:51.912095 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:21:51.912102 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:21:51.912111 kernel: psci: Trusted OS migration not required Sep 4 17:21:51.912117 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:21:51.912124 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:21:51.912132 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:21:51.912139 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:21:51.912146 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:21:51.912152 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:21:51.912159 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:21:51.912166 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:21:51.912173 kernel: CPU features: detected: Spectre-v4 Sep 4 17:21:51.912179 kernel: CPU features: detected: Spectre-BHB Sep 4 17:21:51.912186 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:21:51.912193 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:21:51.912201 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:21:51.912207 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:21:51.912214 kernel: alternatives: applying boot alternatives Sep 4 17:21:51.912221 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:21:51.912229 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:21:51.912236 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:21:51.912242 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:21:51.912249 kernel: Fallback order for Node 0: 0 Sep 4 17:21:51.912256 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:21:51.912262 kernel: Policy zone: DMA Sep 4 17:21:51.912269 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:21:51.912277 kernel: software IO TLB: area num 4. Sep 4 17:21:51.912283 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:21:51.912291 kernel: Memory: 2386596K/2572288K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 185692K reserved, 0K cma-reserved) Sep 4 17:21:51.912297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:21:51.912304 kernel: trace event string verifier disabled Sep 4 17:21:51.912311 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:21:51.912318 kernel: rcu: RCU event tracing is enabled. Sep 4 17:21:51.912325 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:21:51.912332 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:21:51.912338 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:21:51.912345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:21:51.912352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:21:51.912360 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:21:51.912366 kernel: GICv3: 256 SPIs implemented Sep 4 17:21:51.912373 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:21:51.912380 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:21:51.912386 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:21:51.912393 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:21:51.912400 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:21:51.912406 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:21:51.912413 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:21:51.912420 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:21:51.912427 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:21:51.912435 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:21:51.912442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:21:51.912448 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:21:51.912455 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:21:51.912462 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:21:51.912486 kernel: arm-pv: using stolen time PV Sep 4 17:21:51.912493 kernel: Console: colour dummy device 80x25 Sep 4 17:21:51.912500 kernel: ACPI: Core revision 20230628 Sep 4 17:21:51.912507 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 17:21:51.912514 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:21:51.912523 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:21:51.912530 kernel: landlock: Up and running. Sep 4 17:21:51.912536 kernel: SELinux: Initializing. Sep 4 17:21:51.912543 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:21:51.912550 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:21:51.912557 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:21:51.912564 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:21:51.912571 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:21:51.912577 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:21:51.912585 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:21:51.912592 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:21:51.912599 kernel: Remapping and enabling EFI services. Sep 4 17:21:51.912606 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:21:51.912612 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:21:51.912619 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:21:51.912626 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:21:51.912633 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:21:51.912640 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:21:51.912647 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:21:51.912655 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:21:51.912662 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:21:51.912673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:21:51.912682 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:21:51.912690 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:21:51.912697 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:21:51.912704 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:21:51.912711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:21:51.912718 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:21:51.912727 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:21:51.912734 kernel: SMP: Total of 4 processors activated. Sep 4 17:21:51.912741 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:21:51.912749 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:21:51.912756 kernel: CPU features: detected: Common not Private translations Sep 4 17:21:51.912763 kernel: CPU features: detected: CRC32 instructions Sep 4 17:21:51.912771 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:21:51.912778 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:21:51.912786 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:21:51.912794 kernel: CPU features: detected: Privileged Access Never Sep 4 17:21:51.912801 kernel: CPU features: detected: RAS Extension Support Sep 4 17:21:51.912808 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:21:51.912815 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:21:51.912822 kernel: alternatives: applying system-wide alternatives Sep 4 17:21:51.912830 kernel: devtmpfs: initialized Sep 4 17:21:51.912837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:21:51.912844 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:21:51.912853 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:21:51.912860 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:21:51.912868 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:21:51.912875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:21:51.912882 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:21:51.912890 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:21:51.912897 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:21:51.912904 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:21:51.912912 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 4 17:21:51.912920 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:21:51.912928 kernel: cpuidle: using governor menu Sep 4 17:21:51.912935 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:21:51.912942 kernel: ASID allocator initialised with 32768 entries Sep 4 17:21:51.912949 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:21:51.912956 kernel: Serial: AMBA PL011 UART driver Sep 4 17:21:51.912964 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:21:51.912971 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:21:51.912978 kernel: Modules: 509056 pages in range for PLT usage Sep 4 17:21:51.912987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:21:51.912994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:21:51.913001 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:21:51.913008 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:21:51.913016 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:21:51.913023 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:21:51.913030 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:21:51.913038 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:21:51.913045 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:21:51.913053 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:21:51.913060 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:21:51.913068 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:21:51.913075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:21:51.913082 kernel: ACPI: Interpreter enabled Sep 4 17:21:51.913089 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:21:51.913097 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:21:51.913104 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:21:51.913111 kernel: printk: console [ttyAMA0] enabled Sep 4 17:21:51.913120 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:21:51.913257 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:21:51.913332 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:21:51.913398 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:21:51.913461 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:21:51.913565 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:21:51.913576 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:21:51.913588 kernel: PCI host bridge to bus 
0000:00 Sep 4 17:21:51.913660 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:21:51.913719 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:21:51.913777 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:21:51.913834 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:21:51.913912 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 4 17:21:51.913987 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:21:51.914055 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:21:51.914121 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:21:51.914185 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:21:51.914250 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:21:51.914315 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:21:51.914380 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:21:51.914438 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:21:51.914521 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:21:51.914581 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:21:51.914591 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:21:51.914598 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:21:51.914606 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:21:51.914613 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:21:51.914621 kernel: iommu: Default domain type: Translated Sep 4 17:21:51.914628 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:21:51.914637 kernel: efivars: Registered efivars operations Sep 4 17:21:51.914645 kernel: vgaarb: loaded Sep 4 17:21:51.914652 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:21:51.914659 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:21:51.914667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:21:51.914674 kernel: pnp: PnP ACPI init Sep 4 17:21:51.914749 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:21:51.914760 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:21:51.914769 kernel: NET: Registered PF_INET protocol family Sep 4 17:21:51.914777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:21:51.914784 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:21:51.914791 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:21:51.914799 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:21:51.914806 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:21:51.914814 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:21:51.914821 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:21:51.914829 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:21:51.914837 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:21:51.914845 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:21:51.914852 kernel: kvm [1]: HYP mode not available Sep 4 17:21:51.914859 kernel: Initialise system trusted keyrings Sep 4 
17:21:51.914867 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:21:51.914874 kernel: Key type asymmetric registered Sep 4 17:21:51.914881 kernel: Asymmetric key parser 'x509' registered Sep 4 17:21:51.914889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:21:51.914896 kernel: io scheduler mq-deadline registered Sep 4 17:21:51.914904 kernel: io scheduler kyber registered Sep 4 17:21:51.914912 kernel: io scheduler bfq registered Sep 4 17:21:51.914919 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:21:51.914927 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:21:51.914934 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:21:51.915000 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:21:51.915010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:21:51.915017 kernel: thunder_xcv, ver 1.0 Sep 4 17:21:51.915025 kernel: thunder_bgx, ver 1.0 Sep 4 17:21:51.915033 kernel: nicpf, ver 1.0 Sep 4 17:21:51.915041 kernel: nicvf, ver 1.0 Sep 4 17:21:51.915114 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:21:51.915177 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:21:51 UTC (1725470511) Sep 4 17:21:51.915187 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:21:51.915195 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:21:51.915202 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:21:51.915209 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:21:51.915218 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:21:51.915226 kernel: Segment Routing with IPv6 Sep 4 17:21:51.915233 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:21:51.915240 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:21:51.915247 kernel: Key type dns_resolver registered Sep 4 17:21:51.915255 kernel: registered taskstats version 1 Sep 4 17:21:51.915262 kernel: Loading compiled-in X.509 certificates Sep 4 17:21:51.915269 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e' Sep 4 17:21:51.915276 kernel: Key type .fscrypt registered Sep 4 17:21:51.915285 kernel: Key type fscrypt-provisioning registered Sep 4 17:21:51.915292 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:21:51.915300 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:21:51.915307 kernel: ima: No architecture policies found Sep 4 17:21:51.915314 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:21:51.915322 kernel: clk: Disabling unused clocks Sep 4 17:21:51.915329 kernel: Freeing unused kernel memory: 39296K Sep 4 17:21:51.915336 kernel: Run /init as init process Sep 4 17:21:51.915343 kernel: with arguments: Sep 4 17:21:51.915352 kernel: /init Sep 4 17:21:51.915359 kernel: with environment: Sep 4 17:21:51.915366 kernel: HOME=/ Sep 4 17:21:51.915373 kernel: TERM=linux Sep 4 17:21:51.915380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:21:51.915389 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:21:51.915399 systemd[1]: Detected virtualization kvm. 
Sep 4 17:21:51.915407 systemd[1]: Detected architecture arm64. Sep 4 17:21:51.915416 systemd[1]: Running in initrd. Sep 4 17:21:51.915423 systemd[1]: No hostname configured, using default hostname. Sep 4 17:21:51.915431 systemd[1]: Hostname set to . Sep 4 17:21:51.915439 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:21:51.915447 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:21:51.915455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:21:51.915470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:21:51.915488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:21:51.915499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:21:51.915507 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:21:51.915515 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:21:51.915525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:21:51.915533 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:21:51.915541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:21:51.915549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:21:51.915559 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:21:51.915567 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:21:51.915574 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:21:51.915582 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:21:51.915590 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:21:51.915598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:21:51.915606 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:21:51.915614 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:21:51.915623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:21:51.915631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:21:51.915639 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:21:51.915646 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:21:51.915654 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:21:51.915662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:21:51.915670 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:21:51.915678 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:21:51.915685 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:21:51.915694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:21:51.915703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:21:51.915710 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:21:51.915718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 4 17:21:51.915726 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:21:51.915734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:21:51.915762 systemd-journald[237]: Collecting audit messages is disabled. Sep 4 17:21:51.915781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:21:51.915792 systemd-journald[237]: Journal started Sep 4 17:21:51.915810 systemd-journald[237]: Runtime Journal (/run/log/journal/dba23e1c12b64fc18a91e7106fef6e02) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:21:51.922573 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:21:51.922600 kernel: Bridge firewalling registered Sep 4 17:21:51.906309 systemd-modules-load[238]: Inserted module 'overlay' Sep 4 17:21:51.921809 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 4 17:21:51.927373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:21:51.927394 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:21:51.928603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:21:51.930526 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:21:51.940617 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:21:51.942084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:21:51.946524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:21:51.947756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:21:51.950279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:21:51.952504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:21:51.955666 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:21:51.959582 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:21:51.962355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:21:51.969342 dracut-cmdline[275]: dracut-dracut-053 Sep 4 17:21:51.972504 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:21:51.995870 systemd-resolved[280]: Positive Trust Anchors: Sep 4 17:21:51.995890 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:21:51.995922 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:21:52.000728 systemd-resolved[280]: Defaulting to hostname 'linux'. Sep 4 17:21:52.001683 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:21:52.004545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:21:52.042507 kernel: SCSI subsystem initialized Sep 4 17:21:52.047491 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:21:52.055506 kernel: iscsi: registered transport (tcp) Sep 4 17:21:52.070541 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:21:52.070601 kernel: QLogic iSCSI HBA Driver Sep 4 17:21:52.112598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:21:52.122624 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:21:52.137501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:21:52.137569 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:21:52.139093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:21:52.188518 kernel: raid6: neonx8 gen() 15662 MB/s Sep 4 17:21:52.205496 kernel: raid6: neonx4 gen() 15650 MB/s Sep 4 17:21:52.222496 kernel: raid6: neonx2 gen() 13226 MB/s Sep 4 17:21:52.239489 kernel: raid6: neonx1 gen() 10501 MB/s Sep 4 17:21:52.256490 kernel: raid6: int64x8 gen() 6971 MB/s Sep 4 17:21:52.273491 kernel: raid6: int64x4 gen() 7346 MB/s Sep 4 17:21:52.290496 kernel: raid6: int64x2 gen() 6123 MB/s Sep 4 17:21:52.307541 kernel: raid6: int64x1 gen() 5055 MB/s Sep 4 17:21:52.307576 kernel: raid6: using algorithm neonx8 gen() 15662 MB/s Sep 4 17:21:52.325587 kernel: raid6: .... xor() 11920 MB/s, rmw enabled Sep 4 17:21:52.325602 kernel: raid6: using neon recovery algorithm Sep 4 17:21:52.331746 kernel: xor: measuring software checksum speed Sep 4 17:21:52.331763 kernel: 8regs : 19849 MB/sec Sep 4 17:21:52.332753 kernel: 32regs : 19716 MB/sec Sep 4 17:21:52.333603 kernel: arm64_neon : 27215 MB/sec Sep 4 17:21:52.333616 kernel: xor: using function: arm64_neon (27215 MB/sec) Sep 4 17:21:52.384518 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:21:52.395139 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:21:52.407641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:21:52.419697 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 4 17:21:52.422909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:21:52.432627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:21:52.447255 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Sep 4 17:21:52.481656 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:21:52.491787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:21:52.534056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:21:52.541666 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:21:52.557075 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:21:52.560374 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:21:52.561721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:21:52.564257 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:21:52.574617 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:21:52.585550 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:21:52.593679 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 4 17:21:52.593858 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:21:52.597494 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:21:52.597537 kernel: GPT:9289727 != 19775487 Sep 4 17:21:52.597548 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:21:52.597557 kernel: GPT:9289727 != 19775487 Sep 4 17:21:52.597566 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:21:52.599098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:21:52.602029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:21:52.602145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:21:52.608405 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:21:52.610323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:21:52.615752 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (523) Sep 4 17:21:52.610440 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:21:52.614712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:21:52.622496 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (510) Sep 4 17:21:52.628632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:21:52.639012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:21:52.647664 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:21:52.652073 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 17:21:52.655888 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:21:52.657094 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:21:52.663409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:21:52.675644 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:21:52.677385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:21:52.683722 disk-uuid[554]: Primary Header is updated. 
Sep 4 17:21:52.683722 disk-uuid[554]: Secondary Entries is updated. Sep 4 17:21:52.683722 disk-uuid[554]: Secondary Header is updated. Sep 4 17:21:52.686824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:21:52.702796 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:21:53.704496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:21:53.705395 disk-uuid[556]: The operation has completed successfully. Sep 4 17:21:53.732213 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:21:53.732314 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:21:53.754672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:21:53.757624 sh[579]: Success Sep 4 17:21:53.774263 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:21:53.834981 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:21:53.836899 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:21:53.837879 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:21:53.853204 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87 Sep 4 17:21:53.853241 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:21:53.853252 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:21:53.854502 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:21:53.855712 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:21:53.862022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:21:53.863731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:21:53.880707 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:21:53.882364 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:21:53.894402 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:21:53.894452 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:21:53.894487 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:21:53.898525 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:21:53.908919 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:21:53.910715 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:21:53.917676 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:21:53.925661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:21:53.996603 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:21:54.013693 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 17:21:54.026713 ignition[676]: Ignition 2.19.0 Sep 4 17:21:54.026725 ignition[676]: Stage: fetch-offline Sep 4 17:21:54.026763 ignition[676]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:54.026772 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:54.026994 ignition[676]: parsed url from cmdline: "" Sep 4 17:21:54.026998 ignition[676]: no config URL provided Sep 4 17:21:54.027002 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:21:54.027009 ignition[676]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:21:54.027035 ignition[676]: op(1): [started] loading QEMU firmware config module Sep 4 17:21:54.027040 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:21:54.041977 systemd-networkd[769]: lo: Link UP Sep 4 17:21:54.041989 systemd-networkd[769]: lo: Gained carrier Sep 4 17:21:54.042719 systemd-networkd[769]: Enumeration completed Sep 4 17:21:54.043470 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:21:54.043641 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:21:54.043644 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:21:54.045991 ignition[676]: op(1): [finished] loading QEMU firmware config module Sep 4 17:21:54.046688 systemd[1]: Reached target network.target - Network. Sep 4 17:21:54.048836 systemd-networkd[769]: eth0: Link UP Sep 4 17:21:54.048841 systemd-networkd[769]: eth0: Gained carrier Sep 4 17:21:54.048852 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:21:54.059525 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:21:54.095542 ignition[676]: parsing config with SHA512: 9c3570d7e8a8aad30253e01c25aca3c58b1a3a56112b13596e4620c6493252f59194602c411cd6df940b1fc6b81c8d4ceaee19ba82955ffa9d43a78533ae67d4 Sep 4 17:21:54.100001 unknown[676]: fetched base config from "system" Sep 4 17:21:54.100014 unknown[676]: fetched user config from "qemu" Sep 4 17:21:54.100435 ignition[676]: fetch-offline: fetch-offline passed Sep 4 17:21:54.102259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:21:54.100530 ignition[676]: Ignition finished successfully Sep 4 17:21:54.103526 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:21:54.113704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:21:54.130485 ignition[776]: Ignition 2.19.0 Sep 4 17:21:54.130497 ignition[776]: Stage: kargs Sep 4 17:21:54.133983 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:21:54.130687 ignition[776]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:54.130697 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:54.131796 ignition[776]: kargs: kargs passed Sep 4 17:21:54.131851 ignition[776]: Ignition finished successfully Sep 4 17:21:54.142732 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 17:21:54.154973 ignition[785]: Ignition 2.19.0 Sep 4 17:21:54.154984 ignition[785]: Stage: disks Sep 4 17:21:54.155162 ignition[785]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:54.157990 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:21:54.155172 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:54.159542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:21:54.156108 ignition[785]: disks: disks passed Sep 4 17:21:54.161266 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:21:54.156166 ignition[785]: Ignition finished successfully Sep 4 17:21:54.163564 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:21:54.165606 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:21:54.167076 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:21:54.179660 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:21:54.190802 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:21:54.195868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:21:54.205613 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:21:54.255513 kernel: EXT4-fs (vda9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none. Sep 4 17:21:54.255773 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:21:54.257070 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:21:54.265565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:21:54.267900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:21:54.268959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:21:54.269002 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:21:54.269024 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:21:54.276223 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:21:54.277921 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:21:54.284280 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Sep 4 17:21:54.284304 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:21:54.284315 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:21:54.284333 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:21:54.290511 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:21:54.292036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:21:54.342503 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:21:54.347925 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:21:54.352742 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:21:54.356734 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:21:54.453693 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 4 17:21:54.464779 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:21:54.466621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:21:54.473483 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:21:54.490360 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:21:54.493316 ignition[917]: INFO : Ignition 2.19.0 Sep 4 17:21:54.493316 ignition[917]: INFO : Stage: mount Sep 4 17:21:54.495548 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:54.495548 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:54.495548 ignition[917]: INFO : mount: mount passed Sep 4 17:21:54.495548 ignition[917]: INFO : Ignition finished successfully Sep 4 17:21:54.496084 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:21:54.504616 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:21:54.852088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:21:54.860705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:21:54.867851 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929) Sep 4 17:21:54.871521 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:21:54.871558 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:21:54.871569 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:21:54.876503 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:21:54.877833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:21:54.894856 ignition[946]: INFO : Ignition 2.19.0 Sep 4 17:21:54.896732 ignition[946]: INFO : Stage: files Sep 4 17:21:54.896732 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:54.896732 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:54.899555 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:21:54.900992 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:21:54.900992 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:21:54.906778 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:21:54.908147 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:21:54.909411 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:21:54.908967 unknown[946]: wrote ssh authorized keys file for user: core Sep 4 17:21:54.912165 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:21:54.914154 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:21:54.963305 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:21:54.999296 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:21:54.999296 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:21:55.002928 
ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 17:21:55.424230 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:21:55.498274 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:21:55.498274 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:21:55.501910 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Sep 4 17:21:55.759012 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:21:55.875608 systemd-networkd[769]: eth0: Gained IPv6LL Sep 4 17:21:55.959622 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:21:55.959622 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:21:55.963383 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:21:56.002583 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:21:56.006788 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:21:56.009304 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:21:56.009304 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:21:56.009304 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:21:56.009304 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:21:56.009304 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:21:56.009304 ignition[946]: INFO : files: files passed Sep 4 17:21:56.009304 ignition[946]: INFO : Ignition finished successfully Sep 4 17:21:56.011167 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:21:56.026720 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:21:56.030245 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:21:56.031658 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:21:56.031744 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:21:56.038907 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:21:56.041632 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:21:56.041632 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:21:56.044618 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:21:56.045516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:21:56.047416 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:21:56.060669 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:21:56.081759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:21:56.082556 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 4 17:21:56.084035 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:21:56.086046 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:21:56.087977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:21:56.098597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:21:56.111555 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:21:56.123669 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:21:56.134566 systemd[1]: Stopped target network.target - Network. Sep 4 17:21:56.135545 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:21:56.137936 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:21:56.142003 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:21:56.144229 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:21:56.144350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:21:56.147796 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:21:56.150516 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:21:56.152110 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:21:56.154357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:21:56.156485 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:21:56.159689 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:21:56.162030 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:21:56.165294 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:21:56.167168 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:21:56.168957 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:21:56.170438 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:21:56.170592 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:21:56.173042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:21:56.175107 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:21:56.177211 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:21:56.177311 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:21:56.179518 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:21:56.179640 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:21:56.182680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:21:56.182798 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:21:56.184771 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:21:56.186503 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:21:56.187596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:21:56.188935 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:21:56.190821 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 4 17:21:56.192583 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:21:56.192668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:21:56.194384 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:21:56.194484 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:21:56.196579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:21:56.196687 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:21:56.198541 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:21:56.198645 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:21:56.210651 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:21:56.212243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:21:56.213425 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:21:56.215322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:21:56.217298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:21:56.217427 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:21:56.219623 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:21:56.219727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:21:56.225669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:21:56.227068 ignition[1001]: INFO : Ignition 2.19.0 Sep 4 17:21:56.227068 ignition[1001]: INFO : Stage: umount Sep 4 17:21:56.226389 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:21:56.233187 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:21:56.233187 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:21:56.233187 ignition[1001]: INFO : umount: umount passed Sep 4 17:21:56.233187 ignition[1001]: INFO : Ignition finished successfully Sep 4 17:21:56.226509 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:21:56.226824 systemd-networkd[769]: eth0: DHCPv6 lease lost Sep 4 17:21:56.230713 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:21:56.230824 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:21:56.232865 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:21:56.232946 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:21:56.238091 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:21:56.238175 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:21:56.241353 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:21:56.241386 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:21:56.243158 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:21:56.243209 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:21:56.245151 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:21:56.245195 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:21:56.247109 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:21:56.247151 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:21:56.248819 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 4 17:21:56.248862 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:21:56.261575 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:21:56.263141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:21:56.263199 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:21:56.265163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:21:56.265206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:21:56.267203 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:21:56.267244 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:21:56.269574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:21:56.269619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:21:56.271654 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:21:56.282065 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:21:56.282209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:21:56.288828 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:21:56.288924 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:21:56.291917 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:21:56.291964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:21:56.294124 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:21:56.294254 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:21:56.296415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:21:56.296461 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:21:56.298014 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:21:56.298045 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:21:56.300129 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:21:56.300175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:21:56.302715 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:21:56.302758 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:21:56.305132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:21:56.305171 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:21:56.314627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:21:56.315625 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:21:56.315683 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:21:56.317774 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:21:56.317817 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:21:56.319662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:21:56.319699 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:21:56.321671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 4 17:21:56.321710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:21:56.323927 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:21:56.324015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:21:56.326332 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:21:56.328629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:21:56.337516 systemd[1]: Switching root. Sep 4 17:21:56.367576 systemd-journald[237]: Journal stopped Sep 4 17:21:57.111003 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 4 17:21:57.111085 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:21:57.111103 kernel: SELinux: policy capability open_perms=1 Sep 4 17:21:57.111119 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:21:57.111145 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:21:57.111158 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:21:57.111168 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:21:57.111178 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:21:57.111187 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:21:57.111197 kernel: audit: type=1403 audit(1725470516.523:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:21:57.111207 systemd[1]: Successfully loaded SELinux policy in 33.615ms. Sep 4 17:21:57.111224 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.554ms. Sep 4 17:21:57.111250 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:21:57.111261 systemd[1]: Detected virtualization kvm. Sep 4 17:21:57.111271 systemd[1]: Detected architecture arm64. Sep 4 17:21:57.111285 systemd[1]: Detected first boot. Sep 4 17:21:57.111295 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:21:57.111306 zram_generator::config[1046]: No configuration found. Sep 4 17:21:57.111317 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:21:57.111328 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:21:57.111340 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:21:57.111352 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:21:57.111363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:21:57.111374 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:21:57.111386 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:21:57.111396 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:21:57.111408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:21:57.111420 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:21:57.111432 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:21:57.111448 systemd[1]: Created slice user.slice - User and Session Slice. 
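The "SELinux: policy capability ...=1" kernel lines above are printed while the policy loads; the same flags remain readable from selinuxfs afterwards. A minimal sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux location, that reads them back in the same name=0/1 form:

#!/usr/bin/env python3
"""Print SELinux policy capabilities as the kernel logs them (name=0/1)."""
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities() -> dict:
    if not CAPS_DIR.is_dir():
        return {}                      # SELinux disabled or selinuxfs not mounted
    return {entry.name: entry.read_text().strip()
            for entry in sorted(CAPS_DIR.iterdir())}

if __name__ == "__main__":
    for name, value in policy_capabilities().items():
        print(f"policy capability {name}={value}")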
Sep 4 17:21:57.111462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:21:57.111514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:21:57.111529 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:21:57.111540 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:21:57.111551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:21:57.111561 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:21:57.111572 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:21:57.111586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:21:57.111596 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:21:57.111607 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:21:57.111618 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:21:57.111629 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:21:57.111640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:21:57.111651 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:21:57.111662 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:21:57.111675 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:21:57.111685 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:21:57.111698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:21:57.111709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:21:57.111720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:21:57.111730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:21:57.111741 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:21:57.111752 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:21:57.111763 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:21:57.111775 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:21:57.111787 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:21:57.111798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:21:57.111808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:21:57.111819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:21:57.111831 systemd[1]: Reached target machines.target - Containers. Sep 4 17:21:57.111842 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:21:57.111853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:21:57.111863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 4 17:21:57.111875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:21:57.111886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:21:57.111897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:21:57.111907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:21:57.111918 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:21:57.111930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:21:57.111940 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:21:57.111951 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:21:57.111963 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:21:57.111974 kernel: fuse: init (API version 7.39) Sep 4 17:21:57.111984 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:21:57.111994 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:21:57.112004 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:21:57.112015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:21:57.112024 kernel: loop: module loaded Sep 4 17:21:57.112035 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:21:57.112045 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:21:57.112077 systemd-journald[1112]: Collecting audit messages is disabled. Sep 4 17:21:57.112103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:21:57.112114 kernel: ACPI: bus type drm_connector registered Sep 4 17:21:57.112125 systemd-journald[1112]: Journal started Sep 4 17:21:57.112147 systemd-journald[1112]: Runtime Journal (/run/log/journal/dba23e1c12b64fc18a91e7106fef6e02) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:21:56.900838 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:21:56.918047 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:21:56.918421 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:21:57.113597 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:21:57.114494 systemd[1]: Stopped verity-setup.service. Sep 4 17:21:57.119067 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:21:57.119817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:21:57.121036 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:21:57.122361 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:21:57.123499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:21:57.124716 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:21:57.125813 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:21:57.128502 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:21:57.129926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:21:57.132816 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:21:57.132960 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Sep 4 17:21:57.134386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:21:57.134574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:21:57.136038 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:21:57.136179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:21:57.137688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:21:57.137838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:21:57.140824 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:21:57.140957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:21:57.142285 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:21:57.142416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:21:57.143792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:21:57.146808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:21:57.148272 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:21:57.162883 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:21:57.172577 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:21:57.174663 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:21:57.175775 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:21:57.175817 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:21:57.177742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:21:57.180004 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:21:57.182288 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:21:57.183469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:21:57.184699 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:21:57.186787 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:21:57.188016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:21:57.192663 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:21:57.195523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:21:57.195663 systemd-journald[1112]: Time spent on flushing to /var/log/journal/dba23e1c12b64fc18a91e7106fef6e02 is 16.958ms for 858 entries. Sep 4 17:21:57.195663 systemd-journald[1112]: System Journal (/var/log/journal/dba23e1c12b64fc18a91e7106fef6e02) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:21:57.236551 systemd-journald[1112]: Received client request to flush runtime journal. Sep 4 17:21:57.236610 kernel: loop0: detected capacity change from 0 to 114288 Sep 4 17:21:57.196721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
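The journald line above reports spending 16.958 ms flushing 858 entries to persistent storage; a one-off, purely illustrative calculation of the implied per-entry cost:

#!/usr/bin/env python3
"""Average per-entry flush cost implied by the journald message above."""
flush_ms = 16.958     # "Time spent on flushing ... is 16.958ms"
entries = 858         # "... for 858 entries"

print(f"{flush_ms * 1000 / entries:.1f} microseconds per entry")   # ~19.8 us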
Sep 4 17:21:57.201021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:21:57.204206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:21:57.209215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:21:57.211399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:21:57.214867 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:21:57.216315 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:21:57.218308 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:21:57.222440 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:21:57.231034 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:21:57.233276 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:21:57.234968 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:21:57.238458 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:21:57.247609 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:21:57.257121 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 4 17:21:57.257137 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 4 17:21:57.258036 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:21:57.262185 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:21:57.262810 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:21:57.264867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:21:57.272679 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:21:57.288504 kernel: loop1: detected capacity change from 0 to 193208 Sep 4 17:21:57.295111 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:21:57.307658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:21:57.319550 kernel: loop2: detected capacity change from 0 to 65520 Sep 4 17:21:57.319788 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Sep 4 17:21:57.319805 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Sep 4 17:21:57.324075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:21:57.354516 kernel: loop3: detected capacity change from 0 to 114288 Sep 4 17:21:57.360493 kernel: loop4: detected capacity change from 0 to 193208 Sep 4 17:21:57.368496 kernel: loop5: detected capacity change from 0 to 65520 Sep 4 17:21:57.371728 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:21:57.372148 (sd-merge)[1183]: Merged extensions into '/usr'. Sep 4 17:21:57.377288 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:21:57.377306 systemd[1]: Reloading... Sep 4 17:21:57.433858 zram_generator::config[1207]: No configuration found. 
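The (sd-merge) lines above show systemd-sysext picking up the containerd-flatcar, docker-flatcar, and kubernetes extension images and merging them into /usr, including the /etc/extensions/kubernetes.raw symlink Ignition wrote earlier. A simplified sketch of just the discovery step, scanning the usual extension directories for *.raw images; the real tool additionally validates each image's extension-release metadata before merging, which is skipped here:

#!/usr/bin/env python3
"""Enumerate candidate system-extension images, roughly as systemd-sysext does."""
from pathlib import Path

# Directories searched for *.raw images, highest priority first (simplified).
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def candidate_extensions() -> dict:
    found = {}
    for directory in SEARCH_DIRS:
        if not directory.is_dir():
            continue
        for image in sorted(directory.glob("*.raw")):
            # First (higher-priority) hit for a given name wins; later ones are shadowed.
            found.setdefault(image.stem, image.resolve())
    return found

if __name__ == "__main__":
    for name, path in sorted(candidate_extensions().items()):
        print(f"{name}: {path}")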
Sep 4 17:21:57.523176 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:21:57.535344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:21:57.571762 systemd[1]: Reloading finished in 194 ms. Sep 4 17:21:57.605290 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:21:57.606852 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:21:57.617652 systemd[1]: Starting ensure-sysext.service... Sep 4 17:21:57.625056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:21:57.634998 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:21:57.635012 systemd[1]: Reloading... Sep 4 17:21:57.654563 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:21:57.654878 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:21:57.655536 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:21:57.655801 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 4 17:21:57.655851 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 4 17:21:57.659124 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:21:57.659133 systemd-tmpfiles[1243]: Skipping /boot Sep 4 17:21:57.669897 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:21:57.669915 systemd-tmpfiles[1243]: Skipping /boot Sep 4 17:21:57.691512 zram_generator::config[1268]: No configuration found. Sep 4 17:21:57.775400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:21:57.811126 systemd[1]: Reloading finished in 175 ms. Sep 4 17:21:57.823671 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:21:57.833916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:21:57.841216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:21:57.843879 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:21:57.851417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:21:57.856557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:21:57.863696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:21:57.867840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:21:57.876847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:21:57.887248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:21:57.896171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
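The systemd-tmpfiles warnings above ("Duplicate line for path "/root", ignoring", and similar) mean the same path is declared by more than one tmpfiles.d line. A simplified sketch that reports such duplicates across /usr/lib/tmpfiles.d; unlike the real tool it ignores /etc and /run overrides and does not expand specifiers:

#!/usr/bin/env python3
"""Report tmpfiles.d lines whose path was already declared earlier (simplified)."""
from pathlib import Path

def report_duplicates(fragment_dir: Path = Path("/usr/lib/tmpfiles.d")) -> None:
    first_seen = {}                            # path -> "file:line" of first declaration
    for conf in sorted(fragment_dir.glob("*.conf")):
        for lineno, raw in enumerate(conf.read_text().splitlines(), start=1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) < 2:
                continue
            path = fields[1]                   # column 2 of a tmpfiles.d line is the path
            where = f"{conf}:{lineno}"
            if path in first_seen:
                print(f'{where}: Duplicate line for path "{path}", ignoring.')
            else:
                first_seen[path] = where

if __name__ == "__main__":
    report_duplicates()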
Sep 4 17:21:57.899038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:21:57.900791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:21:57.906421 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:21:57.907901 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Sep 4 17:21:57.908916 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:21:57.911631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:21:57.911799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:21:57.913438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:21:57.913603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:21:57.915586 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:21:57.915734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:21:57.926609 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:21:57.930206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:21:57.943949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:21:57.946714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:21:57.948842 augenrules[1339]: No rules Sep 4 17:21:57.948973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:21:57.950221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:21:57.952400 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:21:57.954000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:21:57.955708 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:21:57.958560 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:21:57.960334 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:21:57.960656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:21:57.967112 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:21:57.968915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:21:57.969125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:21:57.975940 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:21:57.977538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:21:57.977670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:21:57.986194 systemd[1]: Finished ensure-sysext.service. Sep 4 17:21:57.993715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:21:58.000671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:21:58.003578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:21:58.004705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 17:21:58.010088 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:21:58.011111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:21:58.013929 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:21:58.016152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:21:58.016537 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:21:58.016688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:21:58.020249 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:21:58.023267 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:21:58.023453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:21:58.025389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:21:58.025625 systemd-resolved[1309]: Positive Trust Anchors: Sep 4 17:21:58.025637 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:21:58.025668 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:21:58.034518 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1355) Sep 4 17:21:58.035925 systemd-resolved[1309]: Defaulting to hostname 'linux'. Sep 4 17:21:58.038001 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:21:58.039719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:21:58.052495 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1355) Sep 4 17:21:58.066527 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1367) Sep 4 17:21:58.074380 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:21:58.076878 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:21:58.105238 systemd-networkd[1376]: lo: Link UP Sep 4 17:21:58.105253 systemd-networkd[1376]: lo: Gained carrier Sep 4 17:21:58.106271 systemd-networkd[1376]: Enumeration completed Sep 4 17:21:58.106735 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:21:58.108509 systemd[1]: Reached target network.target - Network. Sep 4 17:21:58.109251 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:21:58.109338 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:21:58.111651 systemd-networkd[1376]: eth0: Link UP Sep 4 17:21:58.111759 systemd-networkd[1376]: eth0: Gained carrier Sep 4 17:21:58.111819 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:21:58.128708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:21:58.132545 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:21:58.133088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:21:58.133361 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Sep 4 17:21:58.134872 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:21:58.134915 systemd-timesyncd[1377]: Initial clock synchronization to Wed 2024-09-04 17:21:57.992948 UTC. Sep 4 17:21:58.138657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:21:58.141597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:21:58.144508 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:21:58.149111 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:21:58.153158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:21:58.191710 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:21:58.226685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:21:58.228265 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:21:58.231398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:21:58.232618 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:21:58.233790 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:21:58.235010 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:21:58.236454 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:21:58.237677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:21:58.238973 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:21:58.240251 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:21:58.240311 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:21:58.241252 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:21:58.244048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:21:58.246389 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:21:58.255419 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:21:58.257709 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:21:58.259354 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:21:58.260570 systemd[1]: Reached target sockets.target - Socket Units. 
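systemd-networkd logs the lease above as "DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1". A stdlib-only sketch that spells out what the /16 prefix implies for that address and confirms the gateway is on-link:

#!/usr/bin/env python3
"""Spell out the DHCPv4 lease parameters logged by systemd-networkd above."""
import ipaddress

iface = ipaddress.ip_interface("10.0.0.57/16")   # address/prefix as logged
gateway = ipaddress.ip_address("10.0.0.1")       # gateway as logged

print(f"address:   {iface.ip}")
print(f"network:   {iface.network}")                          # 10.0.0.0/16
print(f"netmask:   {iface.network.netmask}")                  # 255.255.0.0
print(f"broadcast: {iface.network.broadcast_address}")        # 10.0.255.255
print(f"gateway on-link: {gateway in iface.network}")         # True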
Sep 4 17:21:58.261566 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:21:58.262570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:21:58.262602 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:21:58.265861 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:21:58.267983 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:21:58.270605 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:21:58.271636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:21:58.275802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:21:58.277044 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:21:58.281624 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:21:58.286640 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:21:58.287742 jq[1408]: false Sep 4 17:21:58.291662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:21:58.298407 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:21:58.305224 extend-filesystems[1409]: Found loop3 Sep 4 17:21:58.305224 extend-filesystems[1409]: Found loop4 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found loop5 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda1 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda2 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda3 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found usr Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda4 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda6 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda7 Sep 4 17:21:58.307837 extend-filesystems[1409]: Found vda9 Sep 4 17:21:58.307837 extend-filesystems[1409]: Checking size of /dev/vda9 Sep 4 17:21:58.305705 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:21:58.308237 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:21:58.308708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:21:58.309623 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:21:58.314621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:21:58.321575 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:21:58.321871 dbus-daemon[1407]: [system] SELinux support is enabled Sep 4 17:21:58.323065 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:21:58.326935 jq[1425]: true Sep 4 17:21:58.327221 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:21:58.327786 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:21:58.328129 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 4 17:21:58.328278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:21:58.330935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:21:58.331792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:21:58.332700 extend-filesystems[1409]: Resized partition /dev/vda9 Sep 4 17:21:58.345595 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:21:58.352197 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:21:58.352242 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:21:58.357404 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:21:58.357425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:21:58.361531 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:21:58.361578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1362) Sep 4 17:21:58.364644 jq[1433]: true Sep 4 17:21:58.369143 (ntainerd)[1435]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:21:58.380357 tar[1431]: linux-arm64/helm Sep 4 17:21:58.383669 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:21:58.386670 systemd-logind[1420]: New seat seat0. Sep 4 17:21:58.387293 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:21:58.395497 update_engine[1424]: I0904 17:21:58.393262 1424 main.cc:92] Flatcar Update Engine starting Sep 4 17:21:58.403198 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:21:58.418574 update_engine[1424]: I0904 17:21:58.405959 1424 update_check_scheduler.cc:74] Next update check in 3m28s Sep 4 17:21:58.406098 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:21:58.421530 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:21:58.421530 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:21:58.421530 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:21:58.419796 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:21:58.426642 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Sep 4 17:21:58.422266 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:21:58.422496 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:21:58.456008 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:21:58.460695 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:21:58.464834 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
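The resize messages above grow /dev/vda9 from 553472 to 1864699 blocks of 4k each. Converting those counts to sizes (illustrative arithmetic only) shows the root filesystem going from roughly 2.1 GiB to 7.1 GiB:

#!/usr/bin/env python3
"""Convert the before/after block counts from the resize messages into sizes."""
BLOCK_SIZE = 4096          # "(4k) blocks"
OLD_BLOCKS = 553_472       # "resizing filesystem from 553472 ..."
NEW_BLOCKS = 1_864_699     # "... to 1864699 blocks"

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")                    # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")                    # ~7.11 GiB
print(f"growth: {gib(NEW_BLOCKS) - gib(OLD_BLOCKS):.2f} GiB")  # ~5.00 GiB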
Sep 4 17:21:58.481880 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:21:58.572720 containerd[1435]: time="2024-09-04T17:21:58.572576280Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:21:58.603895 containerd[1435]: time="2024-09-04T17:21:58.603841480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.605715 containerd[1435]: time="2024-09-04T17:21:58.605461760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:21:58.605715 containerd[1435]: time="2024-09-04T17:21:58.605511080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:21:58.605715 containerd[1435]: time="2024-09-04T17:21:58.605528400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:21:58.607078 containerd[1435]: time="2024-09-04T17:21:58.606909480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:21:58.607141 containerd[1435]: time="2024-09-04T17:21:58.607081080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607211 containerd[1435]: time="2024-09-04T17:21:58.607187600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607211 containerd[1435]: time="2024-09-04T17:21:58.607207920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607415 containerd[1435]: time="2024-09-04T17:21:58.607382280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607415 containerd[1435]: time="2024-09-04T17:21:58.607410400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607508 containerd[1435]: time="2024-09-04T17:21:58.607423560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607508 containerd[1435]: time="2024-09-04T17:21:58.607433640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607570 containerd[1435]: time="2024-09-04T17:21:58.607540360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607768 containerd[1435]: time="2024-09-04T17:21:58.607728120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607850 containerd[1435]: time="2024-09-04T17:21:58.607831960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:21:58.607879 containerd[1435]: time="2024-09-04T17:21:58.607850560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:21:58.607936 containerd[1435]: time="2024-09-04T17:21:58.607922200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:21:58.607978 containerd[1435]: time="2024-09-04T17:21:58.607966680Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:21:58.611984 containerd[1435]: time="2024-09-04T17:21:58.611940400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:21:58.612038 containerd[1435]: time="2024-09-04T17:21:58.612002000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:21:58.612038 containerd[1435]: time="2024-09-04T17:21:58.612018320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:21:58.612038 containerd[1435]: time="2024-09-04T17:21:58.612033920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:21:58.612108 containerd[1435]: time="2024-09-04T17:21:58.612048080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:21:58.612215 containerd[1435]: time="2024-09-04T17:21:58.612193840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612468440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612629600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612648240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612660960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612676040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612688760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612700640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612714240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612728680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612741640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612754120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612766440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612786600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613255 containerd[1435]: time="2024-09-04T17:21:58.612805720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612817800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612829320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612841160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612854760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612866440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612878360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612891160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612905560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612917440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612929000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612940960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612955640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612974600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612988520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 4 17:21:58.613569 containerd[1435]: time="2024-09-04T17:21:58.612999080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:21:58.614294 containerd[1435]: time="2024-09-04T17:21:58.614259400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:21:58.614390 containerd[1435]: time="2024-09-04T17:21:58.614373120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:21:58.614458 containerd[1435]: time="2024-09-04T17:21:58.614434720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:21:58.614541 containerd[1435]: time="2024-09-04T17:21:58.614522640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:21:58.614624 containerd[1435]: time="2024-09-04T17:21:58.614608040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.614684 containerd[1435]: time="2024-09-04T17:21:58.614671000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:21:58.614730 containerd[1435]: time="2024-09-04T17:21:58.614719480Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:21:58.614908 containerd[1435]: time="2024-09-04T17:21:58.614895320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:21:58.615343 containerd[1435]: time="2024-09-04T17:21:58.615281080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:21:58.615555 containerd[1435]: time="2024-09-04T17:21:58.615535440Z" level=info msg="Connect containerd service" Sep 4 17:21:58.615641 containerd[1435]: time="2024-09-04T17:21:58.615627480Z" level=info msg="using legacy CRI server" Sep 4 17:21:58.615687 containerd[1435]: time="2024-09-04T17:21:58.615675280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:21:58.615839 containerd[1435]: time="2024-09-04T17:21:58.615821080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:21:58.616595 containerd[1435]: time="2024-09-04T17:21:58.616565480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:21:58.616926 containerd[1435]: time="2024-09-04T17:21:58.616853960Z" level=info msg="Start subscribing containerd event" Sep 4 17:21:58.616926 containerd[1435]: time="2024-09-04T17:21:58.616913600Z" level=info msg="Start recovering state" Sep 4 17:21:58.616998 containerd[1435]: time="2024-09-04T17:21:58.616981080Z" level=info msg="Start event monitor" Sep 4 17:21:58.616998 containerd[1435]: time="2024-09-04T17:21:58.616991720Z" level=info msg="Start snapshots syncer" Sep 4 17:21:58.617035 containerd[1435]: time="2024-09-04T17:21:58.617001360Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:21:58.617035 containerd[1435]: time="2024-09-04T17:21:58.617008480Z" level=info msg="Start streaming server" Sep 4 17:21:58.617433 containerd[1435]: time="2024-09-04T17:21:58.617410160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:21:58.617580 containerd[1435]: time="2024-09-04T17:21:58.617563440Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:21:58.617683 containerd[1435]: time="2024-09-04T17:21:58.617671000Z" level=info msg="containerd successfully booted in 0.046700s" Sep 4 17:21:58.617765 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:21:58.754359 tar[1431]: linux-arm64/LICENSE Sep 4 17:21:58.754465 tar[1431]: linux-arm64/README.md Sep 4 17:21:58.767841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:21:59.028147 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:21:59.047758 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:21:59.058810 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:21:59.064641 systemd[1]: issuegen.service: Deactivated successfully. 
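
Note on the CNI error above: the CRI plugin reports "no network config found in /etc/cni/net.d" simply because no network add-on has dropped a config file there yet; the "Start cni network conf syncer for default" entry shows it will keep watching that directory. A rough sketch of the same check, assuming the directories from the logged config (NetworkPluginConfDir:/etc/cni/net.d, NetworkPluginBinDir:/opt/cni/bin) and the conventional .conf/.conflist/.json extensions:

    import glob
    import json
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"   # NetworkPluginConfDir from the logged CRI config
    CNI_BIN_DIR = "/opt/cni/bin"      # NetworkPluginBinDir from the logged CRI config

    def cni_status():
        # The CRI plugin loads at most NetworkPluginMaxConfNum (here 1) config files,
        # sorted by name; .conf, .conflist and .json are the usual extensions.
        confs = sorted(
            p for p in glob.glob(os.path.join(CNI_CONF_DIR, "*"))
            if p.endswith((".conf", ".conflist", ".json"))
        )
        if not confs:
            return "no network config found in %s (matches the log line above)" % CNI_CONF_DIR
        with open(confs[0]) as f:
            doc = json.load(f)
        plugins = [p.get("type") for p in doc.get("plugins", [doc])]
        missing = [p for p in plugins if p and not os.path.exists(os.path.join(CNI_BIN_DIR, p))]
        return "config %s uses plugins %s; missing binaries: %s" % (confs[0], plugins, missing or "none")

    if __name__ == "__main__":
        print(cni_status())
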
Sep 4 17:21:59.066498 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:21:59.069301 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:21:59.082352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:21:59.091816 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:21:59.094836 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:21:59.096251 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:22:00.035620 systemd-networkd[1376]: eth0: Gained IPv6LL Sep 4 17:22:00.037767 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:22:00.040071 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:22:00.055760 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:22:00.058273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:00.060432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:22:00.076651 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:22:00.076865 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:22:00.078990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:22:00.081641 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:22:00.662741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:00.664348 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:22:00.666548 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:22:00.668570 systemd[1]: Startup finished in 558ms (kernel) + 4.817s (initrd) + 4.184s (userspace) = 9.560s. Sep 4 17:22:01.275789 kubelet[1521]: E0904 17:22:01.275704 1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:22:01.278766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:22:01.278919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:22:04.668167 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:22:04.669368 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:38238.service - OpenSSH per-connection server daemon (10.0.0.1:38238). Sep 4 17:22:04.736491 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:04.738091 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:04.746028 systemd-logind[1420]: New session 1 of user core. Sep 4 17:22:04.747033 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:22:04.757694 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:22:04.766862 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:22:04.770043 systemd[1]: Starting user@500.service - User Manager for UID 500... 
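
Note on the kubelet failure above: the first kubelet start exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file only appears after `kubeadm init` or `kubeadm join`, so the failure is expected at this point. A small sketch reproducing the same pre-flight condition (the path is taken from the log line; the kubeadm hint is an assumption about how the node will be provisioned):

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the "command failed" log line

    def check_kubelet_config(path=KUBELET_CONFIG):
        # Mirrors the failing open() in the kubelet: report the same "no such file" condition.
        if not os.path.isfile(path):
            print(f"{path} missing: kubelet exits 1 and systemd schedules a restart")
            print("on kubeadm-based setups the file is written by 'kubeadm init' or 'kubeadm join'")
            return 1
        print(f"{path} present ({os.path.getsize(path)} bytes)")
        return 0

    if __name__ == "__main__":
        sys.exit(check_kubelet_config())
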
Sep 4 17:22:04.776132 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:04.858368 systemd[1539]: Queued start job for default target default.target. Sep 4 17:22:04.866363 systemd[1539]: Created slice app.slice - User Application Slice. Sep 4 17:22:04.866406 systemd[1539]: Reached target paths.target - Paths. Sep 4 17:22:04.866418 systemd[1539]: Reached target timers.target - Timers. Sep 4 17:22:04.867672 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:22:04.877660 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:22:04.877719 systemd[1539]: Reached target sockets.target - Sockets. Sep 4 17:22:04.877731 systemd[1539]: Reached target basic.target - Basic System. Sep 4 17:22:04.877766 systemd[1539]: Reached target default.target - Main User Target. Sep 4 17:22:04.877791 systemd[1539]: Startup finished in 96ms. Sep 4 17:22:04.877968 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:22:04.879334 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:22:04.943404 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:38244.service - OpenSSH per-connection server daemon (10.0.0.1:38244). Sep 4 17:22:04.978451 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 38244 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:04.979624 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:04.983249 systemd-logind[1420]: New session 2 of user core. Sep 4 17:22:04.991617 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:22:05.043644 sshd[1550]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:05.058742 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:38244.service: Deactivated successfully. Sep 4 17:22:05.060711 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:22:05.062109 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:22:05.070756 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). Sep 4 17:22:05.071727 systemd-logind[1420]: Removed session 2. Sep 4 17:22:05.102158 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:05.103256 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:05.106821 systemd-logind[1420]: New session 3 of user core. Sep 4 17:22:05.115619 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:22:05.162399 sshd[1557]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:05.177661 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:38256.service: Deactivated successfully. Sep 4 17:22:05.179604 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:22:05.180806 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:22:05.181835 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:38264.service - OpenSSH per-connection server daemon (10.0.0.1:38264). Sep 4 17:22:05.183807 systemd-logind[1420]: Removed session 3. 
Sep 4 17:22:05.216620 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 38264 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:05.218344 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:05.221566 systemd-logind[1420]: New session 4 of user core. Sep 4 17:22:05.234599 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:22:05.285263 sshd[1564]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:05.302676 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:38264.service: Deactivated successfully. Sep 4 17:22:05.304523 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:22:05.305637 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:22:05.306648 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:38268.service - OpenSSH per-connection server daemon (10.0.0.1:38268). Sep 4 17:22:05.307375 systemd-logind[1420]: Removed session 4. Sep 4 17:22:05.340755 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:05.342046 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:05.345807 systemd-logind[1420]: New session 5 of user core. Sep 4 17:22:05.353650 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:22:05.413385 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:22:05.413688 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:22:05.440445 sudo[1574]: pam_unix(sudo:session): session closed for user root Sep 4 17:22:05.443408 sshd[1571]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:05.458873 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:38268.service: Deactivated successfully. Sep 4 17:22:05.460167 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:22:05.462652 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:22:05.463830 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:38282.service - OpenSSH per-connection server daemon (10.0.0.1:38282). Sep 4 17:22:05.464583 systemd-logind[1420]: Removed session 5. Sep 4 17:22:05.499250 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 38282 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:05.500423 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:05.504361 systemd-logind[1420]: New session 6 of user core. Sep 4 17:22:05.523619 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:22:05.575386 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:22:05.575690 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:22:05.578400 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 4 17:22:05.582620 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:22:05.582860 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:22:05.595786 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:22:05.597756 auditctl[1586]: No rules Sep 4 17:22:05.599154 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 4 17:22:05.599335 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:22:05.600849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:22:05.628689 augenrules[1604]: No rules Sep 4 17:22:05.630530 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:22:05.632281 sudo[1582]: pam_unix(sudo:session): session closed for user root Sep 4 17:22:05.634214 sshd[1579]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:05.640671 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:38282.service: Deactivated successfully. Sep 4 17:22:05.641844 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:22:05.644111 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:22:05.646222 systemd-logind[1420]: Removed session 6. Sep 4 17:22:05.647181 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288). Sep 4 17:22:05.680850 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:22:05.681965 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:22:05.685999 systemd-logind[1420]: New session 7 of user core. Sep 4 17:22:05.697625 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:22:05.748033 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:22:05.748295 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:22:05.854916 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:22:05.854978 (dockerd)[1626]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:22:06.149338 dockerd[1626]: time="2024-09-04T17:22:06.149216945Z" level=info msg="Starting up" Sep 4 17:22:06.313689 dockerd[1626]: time="2024-09-04T17:22:06.313623478Z" level=info msg="Loading containers: start." Sep 4 17:22:06.399539 kernel: Initializing XFRM netlink socket Sep 4 17:22:06.483946 systemd-networkd[1376]: docker0: Link UP Sep 4 17:22:06.502958 dockerd[1626]: time="2024-09-04T17:22:06.502853554Z" level=info msg="Loading containers: done." Sep 4 17:22:06.517076 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1919616369-merged.mount: Deactivated successfully. Sep 4 17:22:06.519705 dockerd[1626]: time="2024-09-04T17:22:06.519660584Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:22:06.519797 dockerd[1626]: time="2024-09-04T17:22:06.519779390Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:22:06.519912 dockerd[1626]: time="2024-09-04T17:22:06.519885866Z" level=info msg="Daemon has completed initialization" Sep 4 17:22:06.564534 dockerd[1626]: time="2024-09-04T17:22:06.564456117Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:22:06.564915 systemd[1]: Started docker.service - Docker Application Container Engine. 
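
Note on the docker startup above: dockerd comes up with the overlay2 storage driver (warning only that native diff is disabled because CONFIG_OVERLAY_FS_REDIRECT_DIR is enabled) and then listens on /run/docker.sock. A minimal probe of that socket, as a sketch: it assumes the Docker Engine API's /info endpoint and its ServerVersion/Driver/CgroupDriver fields, and needs root or docker-group access to the socket:

    import json
    import socket

    DOCKER_SOCK = "/run/docker.sock"  # "API listen on /run/docker.sock" from the log

    def docker_info(path="/info"):
        # Plain HTTP/1.0 over the unix socket keeps the response un-chunked and easy to split.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(DOCKER_SOCK)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
        s.close()
        _headers, body = raw.split(b"\r\n\r\n", 1)
        return json.loads(body)

    if __name__ == "__main__":
        info = docker_info()
        print(info.get("ServerVersion"), info.get("Driver"), info.get("CgroupDriver"))
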
Sep 4 17:22:07.201301 containerd[1435]: time="2024-09-04T17:22:07.201255106Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:22:07.874960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890882810.mount: Deactivated successfully. Sep 4 17:22:09.304827 containerd[1435]: time="2024-09-04T17:22:09.304146490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:09.304827 containerd[1435]: time="2024-09-04T17:22:09.304814251Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599024" Sep 4 17:22:09.306383 containerd[1435]: time="2024-09-04T17:22:09.306341437Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:09.309466 containerd[1435]: time="2024-09-04T17:22:09.309419181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:09.310834 containerd[1435]: time="2024-09-04T17:22:09.310603211Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.109303576s" Sep 4 17:22:09.310834 containerd[1435]: time="2024-09-04T17:22:09.310640158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\"" Sep 4 17:22:09.330621 containerd[1435]: time="2024-09-04T17:22:09.330583605Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:22:10.963744 containerd[1435]: time="2024-09-04T17:22:10.963663508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:10.965424 containerd[1435]: time="2024-09-04T17:22:10.965387641Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019498" Sep 4 17:22:10.966490 containerd[1435]: time="2024-09-04T17:22:10.966451344Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:10.969726 containerd[1435]: time="2024-09-04T17:22:10.969693750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:10.971104 containerd[1435]: time="2024-09-04T17:22:10.970950648Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.64032392s" Sep 4 
17:22:10.971104 containerd[1435]: time="2024-09-04T17:22:10.970993062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\"" Sep 4 17:22:10.994046 containerd[1435]: time="2024-09-04T17:22:10.993968841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:22:11.496936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:22:11.506666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:11.597984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:11.601273 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:22:11.660586 kubelet[1860]: E0904 17:22:11.660436 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:22:11.663556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:22:11.663916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:22:12.318554 containerd[1435]: time="2024-09-04T17:22:12.318083823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:12.318888 containerd[1435]: time="2024-09-04T17:22:12.318552142Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533683" Sep 4 17:22:12.320509 containerd[1435]: time="2024-09-04T17:22:12.320480845Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:12.323429 containerd[1435]: time="2024-09-04T17:22:12.323398426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:12.324697 containerd[1435]: time="2024-09-04T17:22:12.324644753Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.330630545s" Sep 4 17:22:12.324697 containerd[1435]: time="2024-09-04T17:22:12.324676670Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\"" Sep 4 17:22:12.343007 containerd[1435]: time="2024-09-04T17:22:12.342948964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:22:13.379079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838054083.mount: Deactivated successfully. 
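
Note on "Scheduled restart job, restart counter is at 1" above: systemd's Restart= logic keeps re-launching the still-unconfigured kubelet, and the counter climbs until /var/lib/kubelet/config.yaml exists. A hedged way to inspect that policy, wrapped in Python; Restart, RestartUSec and NRestarts are standard `systemctl show` properties (NRestarts needs a reasonably recent systemd), and the actual values depend on the installed unit:

    import subprocess

    def unit_restart_info(unit="kubelet.service"):
        # "systemctl show" prints KEY=VALUE pairs for the requested properties.
        out = subprocess.run(
            ["systemctl", "show", unit, "-p", "Restart", "-p", "RestartUSec", "-p", "NRestarts"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

    if __name__ == "__main__":
        # Example output shape: {'Restart': '...', 'RestartUSec': '...', 'NRestarts': '1'}
        print(unit_restart_info())
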
Sep 4 17:22:13.707171 containerd[1435]: time="2024-09-04T17:22:13.707041822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:13.707908 containerd[1435]: time="2024-09-04T17:22:13.707869081Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977932" Sep 4 17:22:13.708786 containerd[1435]: time="2024-09-04T17:22:13.708736594Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:13.714251 containerd[1435]: time="2024-09-04T17:22:13.714205801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:13.715004 containerd[1435]: time="2024-09-04T17:22:13.714975945Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.371989644s" Sep 4 17:22:13.715078 containerd[1435]: time="2024-09-04T17:22:13.715007790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\"" Sep 4 17:22:13.733633 containerd[1435]: time="2024-09-04T17:22:13.733580608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:22:14.186104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462184671.mount: Deactivated successfully. 
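
Note on the image pulls above: containerd logs both the payload size and the wall-clock time for each pull, so the effective throughput can be read straight off the figures quoted in the log. A small calculation over those numbers (sizes in bytes and durations copied verbatim from the "Pulled image ... in ..." lines):

    # Sizes and durations copied from the "Pulled image ... in ..." lines above.
    pulls = {
        "kube-apiserver:v1.28.13":          (31_595_822, 2.109303576),
        "kube-controller-manager:v1.28.13": (30_506_763, 1.64032392),
        "kube-scheduler:v1.28.13":          (17_020_966, 1.330630545),
        "kube-proxy:v1.28.13":              (24_976_949, 1.371989644),
    }

    for image, (size_bytes, seconds) in pulls.items():
        rate = size_bytes / seconds / 1e6  # decimal megabytes per second
        print(f"{image:<36} {size_bytes/1e6:6.1f} MB in {seconds:5.2f} s -> {rate:5.1f} MB/s")
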
Sep 4 17:22:14.191768 containerd[1435]: time="2024-09-04T17:22:14.191715727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:14.192286 containerd[1435]: time="2024-09-04T17:22:14.192251158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Sep 4 17:22:14.193310 containerd[1435]: time="2024-09-04T17:22:14.193269517Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:14.195974 containerd[1435]: time="2024-09-04T17:22:14.195933358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:14.198797 containerd[1435]: time="2024-09-04T17:22:14.197838801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 464.198923ms" Sep 4 17:22:14.198797 containerd[1435]: time="2024-09-04T17:22:14.197880181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:22:14.220310 containerd[1435]: time="2024-09-04T17:22:14.220264162Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:22:14.768451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294871095.mount: Deactivated successfully. 
Sep 4 17:22:16.628265 containerd[1435]: time="2024-09-04T17:22:16.628204038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:16.629689 containerd[1435]: time="2024-09-04T17:22:16.629655450Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Sep 4 17:22:16.630813 containerd[1435]: time="2024-09-04T17:22:16.630776324Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:16.636736 containerd[1435]: time="2024-09-04T17:22:16.636694519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:16.637904 containerd[1435]: time="2024-09-04T17:22:16.637457813Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.417153982s" Sep 4 17:22:16.637904 containerd[1435]: time="2024-09-04T17:22:16.637505871Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:22:16.658656 containerd[1435]: time="2024-09-04T17:22:16.658547420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:22:17.264616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131607671.mount: Deactivated successfully. 
Sep 4 17:22:17.591597 containerd[1435]: time="2024-09-04T17:22:17.591426589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:17.592212 containerd[1435]: time="2024-09-04T17:22:17.592183563Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464" Sep 4 17:22:17.592945 containerd[1435]: time="2024-09-04T17:22:17.592903680Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:17.597116 containerd[1435]: time="2024-09-04T17:22:17.597068653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:17.598442 containerd[1435]: time="2024-09-04T17:22:17.598268515Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 939.67846ms" Sep 4 17:22:17.598442 containerd[1435]: time="2024-09-04T17:22:17.598301623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Sep 4 17:22:21.747081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:22:21.756896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:21.938226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:21.943038 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:22:21.986070 kubelet[2044]: E0904 17:22:21.985999 2044 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:22:21.989059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:22:21.989203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:22:22.421661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:22.434737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:22.454263 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Sep 4 17:22:22.454281 systemd[1]: Reloading... Sep 4 17:22:22.514640 zram_generator::config[2096]: No configuration found. Sep 4 17:22:22.740812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:22:22.793229 systemd[1]: Reloading finished in 338 ms. 
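
Note on the reload above: the "Reloading requested from client PID 2059 ('systemctl')" entry comes from within session-7, presumably the sudo'd install.sh writing or editing unit drop-ins before stopping and restarting the kubelet with its full flag set. A hedged sketch for checking what a unit is actually assembled from; FragmentPath and DropInPaths are standard `systemctl show` properties:

    import subprocess

    def unit_sources(unit="kubelet.service"):
        # FragmentPath is the main unit file; DropInPaths lists any *.conf overrides.
        out = subprocess.run(
            ["systemctl", "show", unit, "-p", "FragmentPath", "-p", "DropInPaths"],
            capture_output=True, text=True, check=True,
        ).stdout
        props = dict(line.split("=", 1) for line in out.splitlines() if "=" in line)
        return props.get("FragmentPath", ""), props.get("DropInPaths", "").split()

    if __name__ == "__main__":
        fragment, dropins = unit_sources()
        print("unit file:", fragment)
        print("drop-ins: ", dropins or "none")
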
Sep 4 17:22:22.834595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:22:22.834666 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:22:22.834907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:22.837011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:22.929090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:22.933006 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:22:22.975208 kubelet[2142]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:22:22.975208 kubelet[2142]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:22:22.975208 kubelet[2142]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:22:22.978199 kubelet[2142]: I0904 17:22:22.977553 2142 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:22:24.355804 kubelet[2142]: I0904 17:22:24.355760 2142 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:22:24.355804 kubelet[2142]: I0904 17:22:24.355790 2142 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:22:24.356158 kubelet[2142]: I0904 17:22:24.355991 2142 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:22:24.402741 kubelet[2142]: I0904 17:22:24.402494 2142 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:22:24.405073 kubelet[2142]: E0904 17:22:24.405039 2142 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.410465 kubelet[2142]: W0904 17:22:24.410434 2142 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:22:24.412940 kubelet[2142]: I0904 17:22:24.412914 2142 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:22:24.413152 kubelet[2142]: I0904 17:22:24.413131 2142 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:22:24.413333 kubelet[2142]: I0904 17:22:24.413306 2142 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:22:24.413423 kubelet[2142]: I0904 17:22:24.413336 2142 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:22:24.413423 kubelet[2142]: I0904 17:22:24.413353 2142 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:22:24.413555 kubelet[2142]: I0904 17:22:24.413542 2142 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:22:24.415229 kubelet[2142]: I0904 17:22:24.415202 2142 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:22:24.415229 kubelet[2142]: I0904 17:22:24.415232 2142 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:22:24.415862 kubelet[2142]: I0904 17:22:24.415318 2142 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:22:24.415862 kubelet[2142]: I0904 17:22:24.415333 2142 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:22:24.415862 kubelet[2142]: W0904 17:22:24.415732 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.415862 kubelet[2142]: E0904 17:22:24.415776 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.416070 kubelet[2142]: W0904 17:22:24.416027 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 
17:22:24.416070 kubelet[2142]: E0904 17:22:24.416065 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.417004 kubelet[2142]: I0904 17:22:24.416739 2142 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:22:24.419486 kubelet[2142]: W0904 17:22:24.419447 2142 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:22:24.420172 kubelet[2142]: I0904 17:22:24.420142 2142 server.go:1232] "Started kubelet" Sep 4 17:22:24.422406 kubelet[2142]: I0904 17:22:24.421257 2142 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:22:24.422406 kubelet[2142]: I0904 17:22:24.422218 2142 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:22:24.423008 kubelet[2142]: I0904 17:22:24.422983 2142 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:22:24.424809 kubelet[2142]: I0904 17:22:24.424782 2142 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:22:24.426280 kubelet[2142]: I0904 17:22:24.426257 2142 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:22:24.426419 kubelet[2142]: I0904 17:22:24.426396 2142 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:22:24.426873 kubelet[2142]: I0904 17:22:24.426852 2142 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:22:24.426944 kubelet[2142]: I0904 17:22:24.426921 2142 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:22:24.428147 kubelet[2142]: E0904 17:22:24.427390 2142 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21a51768f9e22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 22, 24, 420118050, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 22, 24, 420118050, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.57:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.57:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:22:24.428147 kubelet[2142]: E0904 17:22:24.427538 2142 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Sep 4 17:22:24.428147 kubelet[2142]: E0904 17:22:24.427597 2142 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:22:24.428346 kubelet[2142]: E0904 17:22:24.427614 2142 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:22:24.428346 kubelet[2142]: W0904 17:22:24.428016 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.428346 kubelet[2142]: E0904 17:22:24.428061 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.440390 kubelet[2142]: I0904 17:22:24.440354 2142 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:22:24.441386 kubelet[2142]: I0904 17:22:24.441358 2142 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:22:24.441386 kubelet[2142]: I0904 17:22:24.441383 2142 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:22:24.441499 kubelet[2142]: I0904 17:22:24.441401 2142 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:22:24.441499 kubelet[2142]: E0904 17:22:24.441457 2142 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:22:24.443632 kubelet[2142]: W0904 17:22:24.443592 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.443698 kubelet[2142]: E0904 17:22:24.443646 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:24.447892 kubelet[2142]: I0904 17:22:24.447841 2142 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:22:24.447989 kubelet[2142]: I0904 17:22:24.447977 2142 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:22:24.448050 kubelet[2142]: I0904 17:22:24.448042 2142 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:22:24.510764 kubelet[2142]: I0904 17:22:24.510728 2142 policy_none.go:49] "None policy: Start" Sep 4 17:22:24.511697 kubelet[2142]: I0904 17:22:24.511637 2142 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:22:24.511697 kubelet[2142]: I0904 17:22:24.511668 2142 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:22:24.516490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
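
Note on the repeated "dial tcp 10.0.0.57:6443: connect: connection refused" errors above: nothing is listening on the API server port yet, because the kube-apiserver static pod has not started; the kubelet tolerates this, keeps retrying, and meanwhile admits the static pods that will eventually bring the API server up. A minimal probe that distinguishes "refused" (port closed) from a timeout (filtered or unreachable); the address is the one from the log:

    import errno
    import socket

    APISERVER = ("10.0.0.57", 6443)  # endpoint from the "connection refused" lines

    def probe(addr=APISERVER, timeout=2.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return "open (something is listening; TLS handshake not attempted)"
        except ConnectionRefusedError:
            return "connection refused (port closed, apiserver not up yet)"
        except socket.timeout:
            return "timeout (filtered or unreachable)"
        except OSError as e:
            return f"error: {errno.errorcode.get(e.errno, e.errno)}"
        finally:
            s.close()

    if __name__ == "__main__":
        print(f"{APISERVER[0]}:{APISERVER[1]} -> {probe()}")
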
Sep 4 17:22:24.528435 kubelet[2142]: I0904 17:22:24.527894 2142 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:24.528435 kubelet[2142]: E0904 17:22:24.528274 2142 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Sep 4 17:22:24.532386 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:22:24.535501 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:22:24.541593 kubelet[2142]: E0904 17:22:24.541551 2142 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:22:24.546280 kubelet[2142]: I0904 17:22:24.546146 2142 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:22:24.546938 kubelet[2142]: I0904 17:22:24.546430 2142 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:22:24.547312 kubelet[2142]: E0904 17:22:24.547201 2142 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:22:24.628341 kubelet[2142]: E0904 17:22:24.628223 2142 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Sep 4 17:22:24.729582 kubelet[2142]: I0904 17:22:24.729530 2142 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:24.730140 kubelet[2142]: E0904 17:22:24.730115 2142 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Sep 4 17:22:24.744023 kubelet[2142]: I0904 17:22:24.743987 2142 topology_manager.go:215] "Topology Admit Handler" podUID="90bcf8c8d805bb4621caceac32970223" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:22:24.748632 kubelet[2142]: I0904 17:22:24.748457 2142 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:22:24.749567 kubelet[2142]: I0904 17:22:24.749460 2142 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:22:24.755634 systemd[1]: Created slice kubepods-burstable-pod90bcf8c8d805bb4621caceac32970223.slice - libcontainer container kubepods-burstable-pod90bcf8c8d805bb4621caceac32970223.slice. Sep 4 17:22:24.787323 systemd[1]: Created slice kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice - libcontainer container kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice. Sep 4 17:22:24.805296 systemd[1]: Created slice kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice - libcontainer container kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice. 
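
Note on the three "Topology Admit Handler" entries above: these are the static pods from /etc/kubernetes/manifests; the kubelet names each one <manifest name>-<node name> (hence the "-localhost" suffix) and, with the systemd cgroup driver, places it in a kubepods-burstable-pod<UID>.slice matching the slices just created. A small sketch reproducing those derived names; the dash-to-underscore escaping for dashed UIDs is an assumption (these particular UIDs contain no dashes):

    NODE_NAME = "localhost"
    # Pod UIDs copied from the "Topology Admit Handler" lines above.
    STATIC_PODS = {
        "kube-apiserver":          "90bcf8c8d805bb4621caceac32970223",
        "kube-controller-manager": "f5bf8d52acd7337c82951a97b42c345d",
        "kube-scheduler":          "cacd2a680dbc59f99275412e0ba6e38b",
    }

    for manifest_name, uid in STATIC_PODS.items():
        pod_name = f"{manifest_name}-{NODE_NAME}"                            # static pods get the node-name suffix
        slice_name = f"kubepods-burstable-pod{uid.replace('-', '_')}.slice"  # systemd cgroup driver naming
        print(f"{pod_name:<40} {slice_name}")
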
Sep 4 17:22:24.829303 kubelet[2142]: I0904 17:22:24.829262 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:24.829303 kubelet[2142]: I0904 17:22:24.829312 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:24.829440 kubelet[2142]: I0904 17:22:24.829336 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:24.829440 kubelet[2142]: I0904 17:22:24.829364 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:24.829440 kubelet[2142]: I0904 17:22:24.829384 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:24.829440 kubelet[2142]: I0904 17:22:24.829408 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:22:24.829440 kubelet[2142]: I0904 17:22:24.829426 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:24.829573 kubelet[2142]: I0904 17:22:24.829443 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:24.829573 kubelet[2142]: I0904 17:22:24.829462 2142 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:25.029711 kubelet[2142]: E0904 17:22:25.029667 2142 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Sep 4 17:22:25.088020 kubelet[2142]: E0904 17:22:25.087966 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.088674 containerd[1435]: time="2024-09-04T17:22:25.088630825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90bcf8c8d805bb4621caceac32970223,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:25.102939 kubelet[2142]: E0904 17:22:25.102908 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.103379 containerd[1435]: time="2024-09-04T17:22:25.103338756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:25.107937 kubelet[2142]: E0904 17:22:25.107894 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.108349 containerd[1435]: time="2024-09-04T17:22:25.108264468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:25.132146 kubelet[2142]: I0904 17:22:25.132111 2142 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:25.132400 kubelet[2142]: E0904 17:22:25.132386 2142 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Sep 4 17:22:25.553607 kubelet[2142]: W0904 17:22:25.553570 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.553607 kubelet[2142]: E0904 17:22:25.553610 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.566426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870032759.mount: Deactivated successfully. 
Sep 4 17:22:25.570654 containerd[1435]: time="2024-09-04T17:22:25.570613045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:25.572609 containerd[1435]: time="2024-09-04T17:22:25.572561134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:22:25.573144 containerd[1435]: time="2024-09-04T17:22:25.573115415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:25.574687 containerd[1435]: time="2024-09-04T17:22:25.574632100Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:25.575628 containerd[1435]: time="2024-09-04T17:22:25.575595023Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:25.576066 containerd[1435]: time="2024-09-04T17:22:25.576032857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:22:25.576528 containerd[1435]: time="2024-09-04T17:22:25.576498326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:22:25.579204 containerd[1435]: time="2024-09-04T17:22:25.579162308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:25.581197 containerd[1435]: time="2024-09-04T17:22:25.581160555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.831833ms" Sep 4 17:22:25.582051 containerd[1435]: time="2024-09-04T17:22:25.581923090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.208325ms" Sep 4 17:22:25.583060 containerd[1435]: time="2024-09-04T17:22:25.583030015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.614306ms" Sep 4 17:22:25.602956 kubelet[2142]: W0904 17:22:25.602900 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.603293 
kubelet[2142]: E0904 17:22:25.603268 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.720814 containerd[1435]: time="2024-09-04T17:22:25.720630566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:25.720814 containerd[1435]: time="2024-09-04T17:22:25.720677049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:25.720814 containerd[1435]: time="2024-09-04T17:22:25.720688311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.720814 containerd[1435]: time="2024-09-04T17:22:25.720767579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.721182 containerd[1435]: time="2024-09-04T17:22:25.721051429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:25.721182 containerd[1435]: time="2024-09-04T17:22:25.721123709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:25.721182 containerd[1435]: time="2024-09-04T17:22:25.721136408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.721790 containerd[1435]: time="2024-09-04T17:22:25.721742523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.721949 containerd[1435]: time="2024-09-04T17:22:25.721894910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:25.721980 containerd[1435]: time="2024-09-04T17:22:25.721956128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:25.722020 containerd[1435]: time="2024-09-04T17:22:25.721978611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.724729 containerd[1435]: time="2024-09-04T17:22:25.724628138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:25.740642 systemd[1]: Started cri-containerd-4c83bfa751b5e81be100d94787fd81497a5571d0dffa6a55e8ab5334c612b653.scope - libcontainer container 4c83bfa751b5e81be100d94787fd81497a5571d0dffa6a55e8ab5334c612b653. Sep 4 17:22:25.741653 systemd[1]: Started cri-containerd-dba94ca33625775b0b6591e9faeab9ceed172457f49ca9fb10fc782a59c66a26.scope - libcontainer container dba94ca33625775b0b6591e9faeab9ceed172457f49ca9fb10fc782a59c66a26. Sep 4 17:22:25.745049 systemd[1]: Started cri-containerd-e091c1d22ed7f949e8fc3808f47f2c97cdd9af3e54c3aa18dcdb92dab30eb5e7.scope - libcontainer container e091c1d22ed7f949e8fc3808f47f2c97cdd9af3e54c3aa18dcdb92dab30eb5e7. 
Sep 4 17:22:25.779047 containerd[1435]: time="2024-09-04T17:22:25.778892796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90bcf8c8d805bb4621caceac32970223,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c83bfa751b5e81be100d94787fd81497a5571d0dffa6a55e8ab5334c612b653\"" Sep 4 17:22:25.780254 kubelet[2142]: E0904 17:22:25.780231 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.783022 containerd[1435]: time="2024-09-04T17:22:25.782914288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e091c1d22ed7f949e8fc3808f47f2c97cdd9af3e54c3aa18dcdb92dab30eb5e7\"" Sep 4 17:22:25.783857 containerd[1435]: time="2024-09-04T17:22:25.783826056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dba94ca33625775b0b6591e9faeab9ceed172457f49ca9fb10fc782a59c66a26\"" Sep 4 17:22:25.784903 kubelet[2142]: E0904 17:22:25.784869 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.784987 kubelet[2142]: E0904 17:22:25.784953 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:25.785031 containerd[1435]: time="2024-09-04T17:22:25.784886737Z" level=info msg="CreateContainer within sandbox \"4c83bfa751b5e81be100d94787fd81497a5571d0dffa6a55e8ab5334c612b653\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:22:25.786578 containerd[1435]: time="2024-09-04T17:22:25.786492115Z" level=info msg="CreateContainer within sandbox \"e091c1d22ed7f949e8fc3808f47f2c97cdd9af3e54c3aa18dcdb92dab30eb5e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:22:25.787408 containerd[1435]: time="2024-09-04T17:22:25.787272661Z" level=info msg="CreateContainer within sandbox \"dba94ca33625775b0b6591e9faeab9ceed172457f49ca9fb10fc782a59c66a26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:22:25.806767 containerd[1435]: time="2024-09-04T17:22:25.805310790Z" level=info msg="CreateContainer within sandbox \"e091c1d22ed7f949e8fc3808f47f2c97cdd9af3e54c3aa18dcdb92dab30eb5e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0eb47290a49ea540451f7690feb908862ca06ea93e61284c0615b889e8af22fd\"" Sep 4 17:22:25.806767 containerd[1435]: time="2024-09-04T17:22:25.806009991Z" level=info msg="CreateContainer within sandbox \"4c83bfa751b5e81be100d94787fd81497a5571d0dffa6a55e8ab5334c612b653\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1acc7b5e78910201233a5d141dac6679b06dea8e5095fb4ec20ddaa753b66e7f\"" Sep 4 17:22:25.806767 containerd[1435]: time="2024-09-04T17:22:25.806465835Z" level=info msg="StartContainer for \"0eb47290a49ea540451f7690feb908862ca06ea93e61284c0615b889e8af22fd\"" Sep 4 17:22:25.806767 containerd[1435]: time="2024-09-04T17:22:25.806552891Z" level=info msg="StartContainer for \"1acc7b5e78910201233a5d141dac6679b06dea8e5095fb4ec20ddaa753b66e7f\"" Sep 4 17:22:25.809377 
containerd[1435]: time="2024-09-04T17:22:25.809325174Z" level=info msg="CreateContainer within sandbox \"dba94ca33625775b0b6591e9faeab9ceed172457f49ca9fb10fc782a59c66a26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49f2f245cd6f919a0b44350b80e8659c54ccb8b2f3bb68e0039d51ab5b52424c\"" Sep 4 17:22:25.811037 containerd[1435]: time="2024-09-04T17:22:25.809985319Z" level=info msg="StartContainer for \"49f2f245cd6f919a0b44350b80e8659c54ccb8b2f3bb68e0039d51ab5b52424c\"" Sep 4 17:22:25.830483 kubelet[2142]: E0904 17:22:25.830450 2142 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s" Sep 4 17:22:25.832616 systemd[1]: Started cri-containerd-1acc7b5e78910201233a5d141dac6679b06dea8e5095fb4ec20ddaa753b66e7f.scope - libcontainer container 1acc7b5e78910201233a5d141dac6679b06dea8e5095fb4ec20ddaa753b66e7f. Sep 4 17:22:25.836133 systemd[1]: Started cri-containerd-0eb47290a49ea540451f7690feb908862ca06ea93e61284c0615b889e8af22fd.scope - libcontainer container 0eb47290a49ea540451f7690feb908862ca06ea93e61284c0615b889e8af22fd. Sep 4 17:22:25.836994 systemd[1]: Started cri-containerd-49f2f245cd6f919a0b44350b80e8659c54ccb8b2f3bb68e0039d51ab5b52424c.scope - libcontainer container 49f2f245cd6f919a0b44350b80e8659c54ccb8b2f3bb68e0039d51ab5b52424c. Sep 4 17:22:25.884785 containerd[1435]: time="2024-09-04T17:22:25.882495802Z" level=info msg="StartContainer for \"1acc7b5e78910201233a5d141dac6679b06dea8e5095fb4ec20ddaa753b66e7f\" returns successfully" Sep 4 17:22:25.884785 containerd[1435]: time="2024-09-04T17:22:25.882627065Z" level=info msg="StartContainer for \"0eb47290a49ea540451f7690feb908862ca06ea93e61284c0615b889e8af22fd\" returns successfully" Sep 4 17:22:25.884785 containerd[1435]: time="2024-09-04T17:22:25.882653940Z" level=info msg="StartContainer for \"49f2f245cd6f919a0b44350b80e8659c54ccb8b2f3bb68e0039d51ab5b52424c\" returns successfully" Sep 4 17:22:25.926370 kubelet[2142]: W0904 17:22:25.926041 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.926370 kubelet[2142]: E0904 17:22:25.926105 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:25.939634 kubelet[2142]: I0904 17:22:25.934147 2142 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:25.939634 kubelet[2142]: E0904 17:22:25.934429 2142 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Sep 4 17:22:26.000732 kubelet[2142]: W0904 17:22:25.996360 2142 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:26.000732 kubelet[2142]: E0904 17:22:25.996415 2142 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: 
Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Sep 4 17:22:26.450285 kubelet[2142]: E0904 17:22:26.449671 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:26.451058 kubelet[2142]: E0904 17:22:26.451036 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:26.454451 kubelet[2142]: E0904 17:22:26.454433 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:27.455912 kubelet[2142]: E0904 17:22:27.455873 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:27.499631 kubelet[2142]: E0904 17:22:27.499586 2142 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:22:27.536412 kubelet[2142]: I0904 17:22:27.536381 2142 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:27.545217 kubelet[2142]: I0904 17:22:27.545175 2142 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:22:27.552251 kubelet[2142]: E0904 17:22:27.552197 2142 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:22:27.652460 kubelet[2142]: E0904 17:22:27.652399 2142 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:22:27.753094 kubelet[2142]: E0904 17:22:27.753039 2142 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:22:28.418791 kubelet[2142]: I0904 17:22:28.418742 2142 apiserver.go:52] "Watching apiserver" Sep 4 17:22:28.427884 kubelet[2142]: I0904 17:22:28.427853 2142 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:22:28.461263 kubelet[2142]: E0904 17:22:28.461218 2142 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:28.461810 kubelet[2142]: E0904 17:22:28.461783 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:30.038774 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... Sep 4 17:22:30.038793 systemd[1]: Reloading... Sep 4 17:22:30.096620 zram_generator::config[2458]: No configuration found. Sep 4 17:22:30.196906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:22:30.264501 systemd[1]: Reloading finished in 225 ms. Sep 4 17:22:30.303588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 17:22:30.312958 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:22:30.313255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:30.313313 systemd[1]: kubelet.service: Consumed 1.856s CPU time, 117.0M memory peak, 0B memory swap peak. Sep 4 17:22:30.323946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:22:30.433063 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:22:30.435731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:22:30.485089 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:22:30.485089 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:22:30.485089 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:22:30.485089 kubelet[2499]: I0904 17:22:30.484893 2499 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:22:30.489645 kubelet[2499]: I0904 17:22:30.489522 2499 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:22:30.489645 kubelet[2499]: I0904 17:22:30.489548 2499 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:22:30.489786 kubelet[2499]: I0904 17:22:30.489737 2499 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:22:30.493170 kubelet[2499]: I0904 17:22:30.493140 2499 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:22:30.494445 kubelet[2499]: I0904 17:22:30.494375 2499 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:22:30.499711 kubelet[2499]: W0904 17:22:30.499690 2499 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:22:30.500512 kubelet[2499]: I0904 17:22:30.500457 2499 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:22:30.500728 kubelet[2499]: I0904 17:22:30.500696 2499 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:22:30.500955 kubelet[2499]: I0904 17:22:30.500864 2499 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:22:30.500955 kubelet[2499]: I0904 17:22:30.500899 2499 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:22:30.500955 kubelet[2499]: I0904 17:22:30.500909 2499 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:22:30.500955 kubelet[2499]: I0904 17:22:30.500943 2499 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:22:30.501899 kubelet[2499]: I0904 17:22:30.501036 2499 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:22:30.501899 kubelet[2499]: I0904 17:22:30.501050 2499 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:22:30.501899 kubelet[2499]: I0904 17:22:30.501074 2499 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:22:30.501899 kubelet[2499]: I0904 17:22:30.501088 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:22:30.501899 kubelet[2499]: I0904 17:22:30.501831 2499 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:22:30.502585 kubelet[2499]: I0904 17:22:30.502553 2499 server.go:1232] "Started kubelet" Sep 4 17:22:30.503389 kubelet[2499]: E0904 17:22:30.503348 2499 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:22:30.503389 kubelet[2499]: E0904 17:22:30.503385 2499 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:22:30.503753 kubelet[2499]: I0904 17:22:30.503733 2499 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:22:30.504246 kubelet[2499]: I0904 17:22:30.504057 2499 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:22:30.504355 kubelet[2499]: I0904 17:22:30.504064 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:22:30.505467 kubelet[2499]: I0904 17:22:30.505185 2499 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:22:30.506056 kubelet[2499]: I0904 17:22:30.506022 2499 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:22:30.515426 kubelet[2499]: I0904 17:22:30.513565 2499 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:22:30.515426 kubelet[2499]: I0904 17:22:30.513976 2499 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:22:30.515426 kubelet[2499]: I0904 17:22:30.514258 2499 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:22:30.535255 kubelet[2499]: I0904 17:22:30.534826 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:22:30.536882 sudo[2522]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:22:30.537161 sudo[2522]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 17:22:30.544467 kubelet[2499]: I0904 17:22:30.543547 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:22:30.544467 kubelet[2499]: I0904 17:22:30.543593 2499 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:22:30.544467 kubelet[2499]: I0904 17:22:30.543610 2499 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:22:30.544467 kubelet[2499]: E0904 17:22:30.543704 2499 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:22:30.590610 kubelet[2499]: I0904 17:22:30.590508 2499 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:22:30.590610 kubelet[2499]: I0904 17:22:30.590532 2499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:22:30.590610 kubelet[2499]: I0904 17:22:30.590551 2499 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:22:30.590761 kubelet[2499]: I0904 17:22:30.590695 2499 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:22:30.590761 kubelet[2499]: I0904 17:22:30.590715 2499 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:22:30.590761 kubelet[2499]: I0904 17:22:30.590722 2499 policy_none.go:49] "None policy: Start" Sep 4 17:22:30.592382 kubelet[2499]: I0904 17:22:30.592357 2499 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:22:30.592382 kubelet[2499]: I0904 17:22:30.592388 2499 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:22:30.592607 kubelet[2499]: I0904 17:22:30.592591 2499 state_mem.go:75] "Updated machine memory state" Sep 4 17:22:30.596825 kubelet[2499]: I0904 17:22:30.596788 2499 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:22:30.597181 kubelet[2499]: I0904 17:22:30.597025 2499 plugin_manager.go:118] "Starting Kubelet Plugin 
Manager" Sep 4 17:22:30.610365 kubelet[2499]: I0904 17:22:30.610338 2499 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:22:30.618810 kubelet[2499]: I0904 17:22:30.618684 2499 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Sep 4 17:22:30.618810 kubelet[2499]: I0904 17:22:30.618760 2499 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:22:30.644872 kubelet[2499]: I0904 17:22:30.644819 2499 topology_manager.go:215] "Topology Admit Handler" podUID="90bcf8c8d805bb4621caceac32970223" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:22:30.645007 kubelet[2499]: I0904 17:22:30.644948 2499 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:22:30.645007 kubelet[2499]: I0904 17:22:30.644989 2499 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:22:30.715799 kubelet[2499]: I0904 17:22:30.715506 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:30.715799 kubelet[2499]: I0904 17:22:30.715555 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:30.715799 kubelet[2499]: I0904 17:22:30.715590 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:30.715799 kubelet[2499]: I0904 17:22:30.715614 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:30.715799 kubelet[2499]: I0904 17:22:30.715638 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:30.716024 kubelet[2499]: I0904 17:22:30.715657 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:22:30.716024 kubelet[2499]: I0904 
17:22:30.715683 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90bcf8c8d805bb4621caceac32970223-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90bcf8c8d805bb4621caceac32970223\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:30.716024 kubelet[2499]: I0904 17:22:30.715731 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:30.716024 kubelet[2499]: I0904 17:22:30.715768 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:22:30.952302 kubelet[2499]: E0904 17:22:30.952185 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:30.952656 kubelet[2499]: E0904 17:22:30.952584 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:30.952829 kubelet[2499]: E0904 17:22:30.952800 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:30.978912 sudo[2522]: pam_unix(sudo:session): session closed for user root Sep 4 17:22:31.502168 kubelet[2499]: I0904 17:22:31.501918 2499 apiserver.go:52] "Watching apiserver" Sep 4 17:22:31.514289 kubelet[2499]: I0904 17:22:31.514233 2499 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:22:31.565604 kubelet[2499]: E0904 17:22:31.565456 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:31.566140 kubelet[2499]: E0904 17:22:31.566083 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:31.571503 kubelet[2499]: E0904 17:22:31.568913 2499 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:22:31.575307 kubelet[2499]: E0904 17:22:31.575267 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:31.590362 kubelet[2499]: I0904 17:22:31.590321 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.590270259 podCreationTimestamp="2024-09-04 17:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:22:31.582947179 +0000 UTC m=+1.145033432" 
watchObservedRunningTime="2024-09-04 17:22:31.590270259 +0000 UTC m=+1.152356552" Sep 4 17:22:31.598006 kubelet[2499]: I0904 17:22:31.597950 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.597910342 podCreationTimestamp="2024-09-04 17:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:22:31.590596772 +0000 UTC m=+1.152683065" watchObservedRunningTime="2024-09-04 17:22:31.597910342 +0000 UTC m=+1.159996635" Sep 4 17:22:32.564539 kubelet[2499]: E0904 17:22:32.564500 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:32.979491 sudo[1615]: pam_unix(sudo:session): session closed for user root Sep 4 17:22:32.982297 sshd[1612]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:32.986435 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:38288.service: Deactivated successfully. Sep 4 17:22:32.990026 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:22:32.990235 systemd[1]: session-7.scope: Consumed 7.403s CPU time, 135.2M memory peak, 0B memory swap peak. Sep 4 17:22:32.991688 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:22:32.992842 systemd-logind[1420]: Removed session 7. Sep 4 17:22:33.730552 kubelet[2499]: E0904 17:22:33.730491 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:35.006232 kubelet[2499]: E0904 17:22:35.006178 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:35.021813 kubelet[2499]: I0904 17:22:35.021751 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.021706173 podCreationTimestamp="2024-09-04 17:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:22:31.597883173 +0000 UTC m=+1.159969466" watchObservedRunningTime="2024-09-04 17:22:35.021706173 +0000 UTC m=+4.583792466" Sep 4 17:22:35.571786 kubelet[2499]: E0904 17:22:35.571736 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:38.404913 kubelet[2499]: E0904 17:22:38.404876 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:38.578786 kubelet[2499]: E0904 17:22:38.578699 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:43.737558 kubelet[2499]: E0904 17:22:43.737518 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:44.119790 update_engine[1424]: I0904 17:22:44.119163 1424 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:22:44.154659 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2585) Sep 4 17:22:44.187507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2588) Sep 4 17:22:44.215869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2588) Sep 4 17:22:45.098455 kubelet[2499]: I0904 17:22:45.098415 2499 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:22:45.099090 containerd[1435]: time="2024-09-04T17:22:45.098813784Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:22:45.099632 kubelet[2499]: I0904 17:22:45.099598 2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:22:45.722626 kubelet[2499]: I0904 17:22:45.722592 2499 topology_manager.go:215] "Topology Admit Handler" podUID="94320546-42ea-4da9-a541-174823b30fcf" podNamespace="kube-system" podName="cilium-7cq2f" Sep 4 17:22:45.730177 kubelet[2499]: I0904 17:22:45.730135 2499 topology_manager.go:215] "Topology Admit Handler" podUID="2ed85f0e-7fc3-4148-9952-469ec3195381" podNamespace="kube-system" podName="kube-proxy-jj5n2" Sep 4 17:22:45.732135 kubelet[2499]: W0904 17:22:45.732076 2499 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 4 17:22:45.732135 kubelet[2499]: E0904 17:22:45.732108 2499 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 4 17:22:45.744728 systemd[1]: Created slice kubepods-burstable-pod94320546_42ea_4da9_a541_174823b30fcf.slice - libcontainer container kubepods-burstable-pod94320546_42ea_4da9_a541_174823b30fcf.slice. Sep 4 17:22:45.750608 systemd[1]: Created slice kubepods-besteffort-pod2ed85f0e_7fc3_4148_9952_469ec3195381.slice - libcontainer container kubepods-besteffort-pod2ed85f0e_7fc3_4148_9952_469ec3195381.slice. Sep 4 17:22:45.830888 kubelet[2499]: I0904 17:22:45.830839 2499 topology_manager.go:215] "Topology Admit Handler" podUID="256c51a5-5b0e-472e-bf3b-7fa93e912fc5" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-hsdkt" Sep 4 17:22:45.842777 systemd[1]: Created slice kubepods-besteffort-pod256c51a5_5b0e_472e_bf3b_7fa93e912fc5.slice - libcontainer container kubepods-besteffort-pod256c51a5_5b0e_472e_bf3b_7fa93e912fc5.slice. 
Sep 4 17:22:45.909246 kubelet[2499]: I0904 17:22:45.909211 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-hubble-tls\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909615 kubelet[2499]: I0904 17:22:45.909451 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-bpf-maps\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909615 kubelet[2499]: I0904 17:22:45.909510 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-net\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909615 kubelet[2499]: I0904 17:22:45.909535 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ed85f0e-7fc3-4148-9952-469ec3195381-lib-modules\") pod \"kube-proxy-jj5n2\" (UID: \"2ed85f0e-7fc3-4148-9952-469ec3195381\") " pod="kube-system/kube-proxy-jj5n2" Sep 4 17:22:45.909615 kubelet[2499]: I0904 17:22:45.909555 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cni-path\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909615 kubelet[2499]: I0904 17:22:45.909596 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ed85f0e-7fc3-4148-9952-469ec3195381-kube-proxy\") pod \"kube-proxy-jj5n2\" (UID: \"2ed85f0e-7fc3-4148-9952-469ec3195381\") " pod="kube-system/kube-proxy-jj5n2" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909645 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-hostproc\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909695 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-etc-cni-netd\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909734 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-lib-modules\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909755 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-xtables-lock\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909777 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dznjs\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-kube-api-access-dznjs\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.909920 kubelet[2499]: I0904 17:22:45.909818 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-kernel\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.910252 kubelet[2499]: I0904 17:22:45.909842 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94320546-42ea-4da9-a541-174823b30fcf-clustermesh-secrets\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.910252 kubelet[2499]: I0904 17:22:45.909997 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94320546-42ea-4da9-a541-174823b30fcf-cilium-config-path\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.910252 kubelet[2499]: I0904 17:22:45.910023 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-cgroup\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:45.910252 kubelet[2499]: I0904 17:22:45.910045 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ed85f0e-7fc3-4148-9952-469ec3195381-xtables-lock\") pod \"kube-proxy-jj5n2\" (UID: \"2ed85f0e-7fc3-4148-9952-469ec3195381\") " pod="kube-system/kube-proxy-jj5n2" Sep 4 17:22:45.910252 kubelet[2499]: I0904 17:22:45.910072 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfm5q\" (UniqueName: \"kubernetes.io/projected/2ed85f0e-7fc3-4148-9952-469ec3195381-kube-api-access-gfm5q\") pod \"kube-proxy-jj5n2\" (UID: \"2ed85f0e-7fc3-4148-9952-469ec3195381\") " pod="kube-system/kube-proxy-jj5n2" Sep 4 17:22:45.910390 kubelet[2499]: I0904 17:22:45.910092 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-run\") pod \"cilium-7cq2f\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " pod="kube-system/cilium-7cq2f" Sep 4 17:22:46.011010 kubelet[2499]: I0904 17:22:46.010469 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-hsdkt\" 
(UID: \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\") " pod="kube-system/cilium-operator-6bc8ccdb58-hsdkt" Sep 4 17:22:46.011010 kubelet[2499]: I0904 17:22:46.010821 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspjr\" (UniqueName: \"kubernetes.io/projected/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-kube-api-access-qspjr\") pod \"cilium-operator-6bc8ccdb58-hsdkt\" (UID: \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\") " pod="kube-system/cilium-operator-6bc8ccdb58-hsdkt" Sep 4 17:22:46.047965 kubelet[2499]: E0904 17:22:46.047936 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:46.049285 containerd[1435]: time="2024-09-04T17:22:46.049246401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cq2f,Uid:94320546-42ea-4da9-a541-174823b30fcf,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:46.073273 containerd[1435]: time="2024-09-04T17:22:46.073079022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:46.073273 containerd[1435]: time="2024-09-04T17:22:46.073131640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:46.073273 containerd[1435]: time="2024-09-04T17:22:46.073143555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:46.073273 containerd[1435]: time="2024-09-04T17:22:46.073228159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:46.093674 systemd[1]: Started cri-containerd-d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472.scope - libcontainer container d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472. Sep 4 17:22:46.113509 containerd[1435]: time="2024-09-04T17:22:46.111910279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cq2f,Uid:94320546-42ea-4da9-a541-174823b30fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\"" Sep 4 17:22:46.113861 kubelet[2499]: E0904 17:22:46.113014 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:46.118508 containerd[1435]: time="2024-09-04T17:22:46.114211936Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:22:46.148107 kubelet[2499]: E0904 17:22:46.148057 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:46.148767 containerd[1435]: time="2024-09-04T17:22:46.148726676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hsdkt,Uid:256c51a5-5b0e-472e-bf3b-7fa93e912fc5,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:46.180564 containerd[1435]: time="2024-09-04T17:22:46.180420540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:46.180564 containerd[1435]: time="2024-09-04T17:22:46.180511381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:46.180564 containerd[1435]: time="2024-09-04T17:22:46.180525575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:46.180871 containerd[1435]: time="2024-09-04T17:22:46.180613098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:46.211643 systemd[1]: Started cri-containerd-090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff.scope - libcontainer container 090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff. Sep 4 17:22:46.247689 containerd[1435]: time="2024-09-04T17:22:46.247647150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hsdkt,Uid:256c51a5-5b0e-472e-bf3b-7fa93e912fc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\"" Sep 4 17:22:46.248923 kubelet[2499]: E0904 17:22:46.248711 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:46.964166 kubelet[2499]: E0904 17:22:46.964045 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:46.964762 containerd[1435]: time="2024-09-04T17:22:46.964716112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj5n2,Uid:2ed85f0e-7fc3-4148-9952-469ec3195381,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:46.987246 containerd[1435]: time="2024-09-04T17:22:46.987156688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:46.987246 containerd[1435]: time="2024-09-04T17:22:46.987215263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:46.988133 containerd[1435]: time="2024-09-04T17:22:46.987618491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:46.988133 containerd[1435]: time="2024-09-04T17:22:46.988081813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:47.021723 systemd[1]: Started cri-containerd-a8767771eeadec478a12adffdbcb9658d4d420999751a34e4d2446f694489db6.scope - libcontainer container a8767771eeadec478a12adffdbcb9658d4d420999751a34e4d2446f694489db6. 
Sep 4 17:22:47.062976 containerd[1435]: time="2024-09-04T17:22:47.062919253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj5n2,Uid:2ed85f0e-7fc3-4148-9952-469ec3195381,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8767771eeadec478a12adffdbcb9658d4d420999751a34e4d2446f694489db6\"" Sep 4 17:22:47.064395 kubelet[2499]: E0904 17:22:47.064375 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:47.066940 containerd[1435]: time="2024-09-04T17:22:47.066900339Z" level=info msg="CreateContainer within sandbox \"a8767771eeadec478a12adffdbcb9658d4d420999751a34e4d2446f694489db6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:22:47.092745 containerd[1435]: time="2024-09-04T17:22:47.092617243Z" level=info msg="CreateContainer within sandbox \"a8767771eeadec478a12adffdbcb9658d4d420999751a34e4d2446f694489db6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1490bb2f85b3a88f977ed438fd00288abf29a16a4cbedce2512210d88db9aeb7\"" Sep 4 17:22:47.094602 containerd[1435]: time="2024-09-04T17:22:47.093185735Z" level=info msg="StartContainer for \"1490bb2f85b3a88f977ed438fd00288abf29a16a4cbedce2512210d88db9aeb7\"" Sep 4 17:22:47.128736 systemd[1]: Started cri-containerd-1490bb2f85b3a88f977ed438fd00288abf29a16a4cbedce2512210d88db9aeb7.scope - libcontainer container 1490bb2f85b3a88f977ed438fd00288abf29a16a4cbedce2512210d88db9aeb7. Sep 4 17:22:47.157620 containerd[1435]: time="2024-09-04T17:22:47.157575275Z" level=info msg="StartContainer for \"1490bb2f85b3a88f977ed438fd00288abf29a16a4cbedce2512210d88db9aeb7\" returns successfully" Sep 4 17:22:47.596853 kubelet[2499]: E0904 17:22:47.596805 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:50.608632 kubelet[2499]: I0904 17:22:50.608594 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jj5n2" podStartSLOduration=5.608553747 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:22:47.607433367 +0000 UTC m=+17.169519660" watchObservedRunningTime="2024-09-04 17:22:50.608553747 +0000 UTC m=+20.170640000" Sep 4 17:22:51.881246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850953094.mount: Deactivated successfully. 
Sep 4 17:22:53.324355 containerd[1435]: time="2024-09-04T17:22:53.324300719Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:53.325713 containerd[1435]: time="2024-09-04T17:22:53.325502672Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651534" Sep 4 17:22:53.326439 containerd[1435]: time="2024-09-04T17:22:53.326390831Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:53.329074 containerd[1435]: time="2024-09-04T17:22:53.329034992Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.214785911s" Sep 4 17:22:53.329140 containerd[1435]: time="2024-09-04T17:22:53.329077461Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:22:53.331029 containerd[1435]: time="2024-09-04T17:22:53.330996819Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:22:53.334462 containerd[1435]: time="2024-09-04T17:22:53.334343909Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:22:53.367005 containerd[1435]: time="2024-09-04T17:22:53.366942809Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\"" Sep 4 17:22:53.367463 containerd[1435]: time="2024-09-04T17:22:53.367433476Z" level=info msg="StartContainer for \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\"" Sep 4 17:22:53.398735 systemd[1]: Started cri-containerd-4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44.scope - libcontainer container 4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44. Sep 4 17:22:53.428308 containerd[1435]: time="2024-09-04T17:22:53.428245628Z" level=info msg="StartContainer for \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\" returns successfully" Sep 4 17:22:53.471587 systemd[1]: cri-containerd-4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44.scope: Deactivated successfully. 
Sep 4 17:22:53.635024 containerd[1435]: time="2024-09-04T17:22:53.634062250Z" level=info msg="shim disconnected" id=4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44 namespace=k8s.io Sep 4 17:22:53.635024 containerd[1435]: time="2024-09-04T17:22:53.634115595Z" level=warning msg="cleaning up after shim disconnected" id=4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44 namespace=k8s.io Sep 4 17:22:53.635024 containerd[1435]: time="2024-09-04T17:22:53.634124073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:22:53.635672 kubelet[2499]: E0904 17:22:53.634696 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:54.364014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44-rootfs.mount: Deactivated successfully. Sep 4 17:22:54.631373 kubelet[2499]: E0904 17:22:54.631250 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:54.654493 containerd[1435]: time="2024-09-04T17:22:54.654402708Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:22:54.677671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716540759.mount: Deactivated successfully. Sep 4 17:22:54.678036 containerd[1435]: time="2024-09-04T17:22:54.677999815Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\"" Sep 4 17:22:54.678731 containerd[1435]: time="2024-09-04T17:22:54.678690839Z" level=info msg="StartContainer for \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\"" Sep 4 17:22:54.719704 systemd[1]: Started cri-containerd-9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c.scope - libcontainer container 9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c. Sep 4 17:22:54.754273 containerd[1435]: time="2024-09-04T17:22:54.754227993Z" level=info msg="StartContainer for \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\" returns successfully" Sep 4 17:22:54.786256 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:22:54.786491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:22:54.786569 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:22:54.794844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:22:54.795028 systemd[1]: cri-containerd-9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c.scope: Deactivated successfully. Sep 4 17:22:54.836181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:22:54.839328 containerd[1435]: time="2024-09-04T17:22:54.839272604Z" level=info msg="shim disconnected" id=9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c namespace=k8s.io Sep 4 17:22:54.839840 containerd[1435]: time="2024-09-04T17:22:54.839703374Z" level=warning msg="cleaning up after shim disconnected" id=9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c namespace=k8s.io Sep 4 17:22:54.839840 containerd[1435]: time="2024-09-04T17:22:54.839724368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:22:55.045997 containerd[1435]: time="2024-09-04T17:22:55.045937030Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:55.048428 containerd[1435]: time="2024-09-04T17:22:55.048391644Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138350" Sep 4 17:22:55.049611 containerd[1435]: time="2024-09-04T17:22:55.049570002Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:22:55.050970 containerd[1435]: time="2024-09-04T17:22:55.050933477Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.719900548s" Sep 4 17:22:55.051020 containerd[1435]: time="2024-09-04T17:22:55.050973427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:22:55.054843 containerd[1435]: time="2024-09-04T17:22:55.054809551Z" level=info msg="CreateContainer within sandbox \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:22:55.078160 containerd[1435]: time="2024-09-04T17:22:55.078090150Z" level=info msg="CreateContainer within sandbox \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\"" Sep 4 17:22:55.079745 containerd[1435]: time="2024-09-04T17:22:55.078826454Z" level=info msg="StartContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\"" Sep 4 17:22:55.117724 systemd[1]: Started cri-containerd-417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42.scope - libcontainer container 417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42. Sep 4 17:22:55.147329 containerd[1435]: time="2024-09-04T17:22:55.147274104Z" level=info msg="StartContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" returns successfully" Sep 4 17:22:55.366764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c-rootfs.mount: Deactivated successfully. 
Sep 4 17:22:55.639194 kubelet[2499]: E0904 17:22:55.639086 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:55.640845 kubelet[2499]: E0904 17:22:55.640581 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:55.642376 containerd[1435]: time="2024-09-04T17:22:55.642333210Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:22:55.648726 kubelet[2499]: I0904 17:22:55.648692 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-hsdkt" podStartSLOduration=1.8463564890000002 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="2024-09-04 17:22:46.249179895 +0000 UTC m=+15.811266188" lastFinishedPulling="2024-09-04 17:22:55.051467629 +0000 UTC m=+24.613553922" observedRunningTime="2024-09-04 17:22:55.646335774 +0000 UTC m=+25.208422027" watchObservedRunningTime="2024-09-04 17:22:55.648644223 +0000 UTC m=+25.210730516" Sep 4 17:22:55.682162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479837765.mount: Deactivated successfully. Sep 4 17:22:55.756832 containerd[1435]: time="2024-09-04T17:22:55.756775794Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\"" Sep 4 17:22:55.757888 containerd[1435]: time="2024-09-04T17:22:55.757813306Z" level=info msg="StartContainer for \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\"" Sep 4 17:22:55.786678 systemd[1]: Started cri-containerd-fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f.scope - libcontainer container fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f. Sep 4 17:22:55.816715 containerd[1435]: time="2024-09-04T17:22:55.816665608Z" level=info msg="StartContainer for \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\" returns successfully" Sep 4 17:22:55.829713 systemd[1]: cri-containerd-fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f.scope: Deactivated successfully. Sep 4 17:22:55.854824 containerd[1435]: time="2024-09-04T17:22:55.854751870Z" level=info msg="shim disconnected" id=fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f namespace=k8s.io Sep 4 17:22:55.854824 containerd[1435]: time="2024-09-04T17:22:55.854809976Z" level=warning msg="cleaning up after shim disconnected" id=fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f namespace=k8s.io Sep 4 17:22:55.854824 containerd[1435]: time="2024-09-04T17:22:55.854823853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:22:56.363959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f-rootfs.mount: Deactivated successfully. 
Sep 4 17:22:56.644123 kubelet[2499]: E0904 17:22:56.644006 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:56.644575 kubelet[2499]: E0904 17:22:56.644298 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:56.646830 containerd[1435]: time="2024-09-04T17:22:56.646717905Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:22:56.706124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764481154.mount: Deactivated successfully. Sep 4 17:22:56.710717 containerd[1435]: time="2024-09-04T17:22:56.710663105Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\"" Sep 4 17:22:56.712407 containerd[1435]: time="2024-09-04T17:22:56.711320078Z" level=info msg="StartContainer for \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\"" Sep 4 17:22:56.750665 systemd[1]: Started cri-containerd-457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d.scope - libcontainer container 457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d. Sep 4 17:22:56.775800 systemd[1]: cri-containerd-457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d.scope: Deactivated successfully. Sep 4 17:22:56.787689 containerd[1435]: time="2024-09-04T17:22:56.785644195Z" level=info msg="StartContainer for \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\" returns successfully" Sep 4 17:22:56.799273 containerd[1435]: time="2024-09-04T17:22:56.799218955Z" level=info msg="shim disconnected" id=457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d namespace=k8s.io Sep 4 17:22:56.799468 containerd[1435]: time="2024-09-04T17:22:56.799449943Z" level=warning msg="cleaning up after shim disconnected" id=457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d namespace=k8s.io Sep 4 17:22:56.799573 containerd[1435]: time="2024-09-04T17:22:56.799556919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:22:57.364054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d-rootfs.mount: Deactivated successfully. 
Sep 4 17:22:57.649007 kubelet[2499]: E0904 17:22:57.648903 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:57.653548 containerd[1435]: time="2024-09-04T17:22:57.653509856Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:22:57.678939 containerd[1435]: time="2024-09-04T17:22:57.678867293Z" level=info msg="CreateContainer within sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\"" Sep 4 17:22:57.679420 containerd[1435]: time="2024-09-04T17:22:57.679393143Z" level=info msg="StartContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\"" Sep 4 17:22:57.708700 systemd[1]: Started cri-containerd-6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033.scope - libcontainer container 6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033. Sep 4 17:22:57.743941 containerd[1435]: time="2024-09-04T17:22:57.743871566Z" level=info msg="StartContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" returns successfully" Sep 4 17:22:57.904344 kubelet[2499]: I0904 17:22:57.904224 2499 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:22:57.948496 kubelet[2499]: I0904 17:22:57.948352 2499 topology_manager.go:215] "Topology Admit Handler" podUID="49c61f81-2e0e-4138-a7d5-bb0f666abe7d" podNamespace="kube-system" podName="coredns-5dd5756b68-pdlfc" Sep 4 17:22:57.948636 kubelet[2499]: I0904 17:22:57.948621 2499 topology_manager.go:215] "Topology Admit Handler" podUID="5642039f-1bab-4e5a-a7bb-d111fb1a127c" podNamespace="kube-system" podName="coredns-5dd5756b68-sxqgn" Sep 4 17:22:57.968436 systemd[1]: Created slice kubepods-burstable-pod5642039f_1bab_4e5a_a7bb_d111fb1a127c.slice - libcontainer container kubepods-burstable-pod5642039f_1bab_4e5a_a7bb_d111fb1a127c.slice. Sep 4 17:22:57.975514 systemd[1]: Created slice kubepods-burstable-pod49c61f81_2e0e_4138_a7d5_bb0f666abe7d.slice - libcontainer container kubepods-burstable-pod49c61f81_2e0e_4138_a7d5_bb0f666abe7d.slice. 
Sep 4 17:22:58.097801 kubelet[2499]: I0904 17:22:58.097749 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjj7\" (UniqueName: \"kubernetes.io/projected/5642039f-1bab-4e5a-a7bb-d111fb1a127c-kube-api-access-xnjj7\") pod \"coredns-5dd5756b68-sxqgn\" (UID: \"5642039f-1bab-4e5a-a7bb-d111fb1a127c\") " pod="kube-system/coredns-5dd5756b68-sxqgn" Sep 4 17:22:58.097801 kubelet[2499]: I0904 17:22:58.097808 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49c61f81-2e0e-4138-a7d5-bb0f666abe7d-config-volume\") pod \"coredns-5dd5756b68-pdlfc\" (UID: \"49c61f81-2e0e-4138-a7d5-bb0f666abe7d\") " pod="kube-system/coredns-5dd5756b68-pdlfc" Sep 4 17:22:58.097959 kubelet[2499]: I0904 17:22:58.097836 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbchb\" (UniqueName: \"kubernetes.io/projected/49c61f81-2e0e-4138-a7d5-bb0f666abe7d-kube-api-access-hbchb\") pod \"coredns-5dd5756b68-pdlfc\" (UID: \"49c61f81-2e0e-4138-a7d5-bb0f666abe7d\") " pod="kube-system/coredns-5dd5756b68-pdlfc" Sep 4 17:22:58.097959 kubelet[2499]: I0904 17:22:58.097855 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5642039f-1bab-4e5a-a7bb-d111fb1a127c-config-volume\") pod \"coredns-5dd5756b68-sxqgn\" (UID: \"5642039f-1bab-4e5a-a7bb-d111fb1a127c\") " pod="kube-system/coredns-5dd5756b68-sxqgn" Sep 4 17:22:58.272710 kubelet[2499]: E0904 17:22:58.272675 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:58.274163 containerd[1435]: time="2024-09-04T17:22:58.273803332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sxqgn,Uid:5642039f-1bab-4e5a-a7bb-d111fb1a127c,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:58.279217 kubelet[2499]: E0904 17:22:58.279191 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:58.280209 containerd[1435]: time="2024-09-04T17:22:58.279622826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pdlfc,Uid:49c61f81-2e0e-4138-a7d5-bb0f666abe7d,Namespace:kube-system,Attempt:0,}" Sep 4 17:22:58.655018 kubelet[2499]: E0904 17:22:58.654911 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:22:58.672079 kubelet[2499]: I0904 17:22:58.671728 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7cq2f" podStartSLOduration=6.455242525 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="2024-09-04 17:22:46.113709111 +0000 UTC m=+15.675795403" lastFinishedPulling="2024-09-04 17:22:53.330153728 +0000 UTC m=+22.892240101" observedRunningTime="2024-09-04 17:22:58.670870384 +0000 UTC m=+28.232956677" watchObservedRunningTime="2024-09-04 17:22:58.671687223 +0000 UTC m=+28.233773516" Sep 4 17:22:59.657488 kubelet[2499]: E0904 17:22:59.657455 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:00.024134 systemd-networkd[1376]: cilium_host: Link UP Sep 4 17:23:00.025751 systemd-networkd[1376]: cilium_net: Link UP Sep 4 17:23:00.026041 systemd-networkd[1376]: cilium_net: Gained carrier Sep 4 17:23:00.026259 systemd-networkd[1376]: cilium_host: Gained carrier Sep 4 17:23:00.129840 systemd-networkd[1376]: cilium_vxlan: Link UP Sep 4 17:23:00.129848 systemd-networkd[1376]: cilium_vxlan: Gained carrier Sep 4 17:23:00.331617 systemd-networkd[1376]: cilium_host: Gained IPv6LL Sep 4 17:23:00.485509 kernel: NET: Registered PF_ALG protocol family Sep 4 17:23:00.658397 kubelet[2499]: E0904 17:23:00.658218 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:00.904282 systemd-networkd[1376]: cilium_net: Gained IPv6LL Sep 4 17:23:01.141814 systemd-networkd[1376]: lxc_health: Link UP Sep 4 17:23:01.151230 systemd-networkd[1376]: lxc_health: Gained carrier Sep 4 17:23:01.403354 systemd-networkd[1376]: lxce6d1bd31a0be: Link UP Sep 4 17:23:01.410153 systemd-networkd[1376]: lxcac0b62bdb83a: Link UP Sep 4 17:23:01.415665 kernel: eth0: renamed from tmp4b500 Sep 4 17:23:01.427800 kernel: eth0: renamed from tmp7d8ef Sep 4 17:23:01.437220 systemd-networkd[1376]: lxce6d1bd31a0be: Gained carrier Sep 4 17:23:01.437880 systemd-networkd[1376]: lxcac0b62bdb83a: Gained carrier Sep 4 17:23:01.795661 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Sep 4 17:23:02.050439 kubelet[2499]: E0904 17:23:02.050327 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:02.627607 systemd-networkd[1376]: lxce6d1bd31a0be: Gained IPv6LL Sep 4 17:23:02.628553 systemd-networkd[1376]: lxc_health: Gained IPv6LL Sep 4 17:23:02.662808 kubelet[2499]: E0904 17:23:02.662772 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:03.076617 systemd-networkd[1376]: lxcac0b62bdb83a: Gained IPv6LL Sep 4 17:23:03.665399 kubelet[2499]: E0904 17:23:03.665027 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:05.197616 containerd[1435]: time="2024-09-04T17:23:05.197522393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:05.197989 containerd[1435]: time="2024-09-04T17:23:05.197634908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:05.197989 containerd[1435]: time="2024-09-04T17:23:05.197663306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:05.197989 containerd[1435]: time="2024-09-04T17:23:05.197769940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:05.198629 containerd[1435]: time="2024-09-04T17:23:05.198571939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:05.198629 containerd[1435]: time="2024-09-04T17:23:05.198615856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:05.198712 containerd[1435]: time="2024-09-04T17:23:05.198635335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:05.198736 containerd[1435]: time="2024-09-04T17:23:05.198715611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:05.234678 systemd[1]: Started cri-containerd-4b5007b902623041fdcf506f3d94dca19a2cdd3e4572e9662221853dc7472363.scope - libcontainer container 4b5007b902623041fdcf506f3d94dca19a2cdd3e4572e9662221853dc7472363. Sep 4 17:23:05.236050 systemd[1]: Started cri-containerd-7d8efff172abfcac1ecd15fc1ef19f4c6233a72410ba45c742fab97b4accfd79.scope - libcontainer container 7d8efff172abfcac1ecd15fc1ef19f4c6233a72410ba45c742fab97b4accfd79. Sep 4 17:23:05.245492 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:23:05.247486 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:23:05.267427 containerd[1435]: time="2024-09-04T17:23:05.267381701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pdlfc,Uid:49c61f81-2e0e-4138-a7d5-bb0f666abe7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5007b902623041fdcf506f3d94dca19a2cdd3e4572e9662221853dc7472363\"" Sep 4 17:23:05.268468 containerd[1435]: time="2024-09-04T17:23:05.268403168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sxqgn,Uid:5642039f-1bab-4e5a-a7bb-d111fb1a127c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d8efff172abfcac1ecd15fc1ef19f4c6233a72410ba45c742fab97b4accfd79\"" Sep 4 17:23:05.268647 kubelet[2499]: E0904 17:23:05.268528 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:05.269700 kubelet[2499]: E0904 17:23:05.269433 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:05.272892 containerd[1435]: time="2024-09-04T17:23:05.272859975Z" level=info msg="CreateContainer within sandbox \"7d8efff172abfcac1ecd15fc1ef19f4c6233a72410ba45c742fab97b4accfd79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:23:05.272980 containerd[1435]: time="2024-09-04T17:23:05.272862455Z" level=info msg="CreateContainer within sandbox \"4b5007b902623041fdcf506f3d94dca19a2cdd3e4572e9662221853dc7472363\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:23:05.286528 containerd[1435]: time="2024-09-04T17:23:05.286486302Z" level=info msg="CreateContainer within sandbox \"7d8efff172abfcac1ecd15fc1ef19f4c6233a72410ba45c742fab97b4accfd79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4544ba02521fc38daea0c220cdff0745d785830c3ddfa0bde3ce1adbb933261d\"" Sep 4 17:23:05.287038 containerd[1435]: time="2024-09-04T17:23:05.287011395Z" level=info msg="StartContainer for \"4544ba02521fc38daea0c220cdff0745d785830c3ddfa0bde3ce1adbb933261d\"" Sep 4 
17:23:05.290395 containerd[1435]: time="2024-09-04T17:23:05.290359820Z" level=info msg="CreateContainer within sandbox \"4b5007b902623041fdcf506f3d94dca19a2cdd3e4572e9662221853dc7472363\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d5c37e5798249276f8b72a9b5874040681f2c9a9b03fe87cfb615ae7c31896d\"" Sep 4 17:23:05.290884 containerd[1435]: time="2024-09-04T17:23:05.290798717Z" level=info msg="StartContainer for \"2d5c37e5798249276f8b72a9b5874040681f2c9a9b03fe87cfb615ae7c31896d\"" Sep 4 17:23:05.315896 systemd[1]: Started cri-containerd-4544ba02521fc38daea0c220cdff0745d785830c3ddfa0bde3ce1adbb933261d.scope - libcontainer container 4544ba02521fc38daea0c220cdff0745d785830c3ddfa0bde3ce1adbb933261d. Sep 4 17:23:05.318716 systemd[1]: Started cri-containerd-2d5c37e5798249276f8b72a9b5874040681f2c9a9b03fe87cfb615ae7c31896d.scope - libcontainer container 2d5c37e5798249276f8b72a9b5874040681f2c9a9b03fe87cfb615ae7c31896d. Sep 4 17:23:05.367648 containerd[1435]: time="2024-09-04T17:23:05.367598062Z" level=info msg="StartContainer for \"4544ba02521fc38daea0c220cdff0745d785830c3ddfa0bde3ce1adbb933261d\" returns successfully" Sep 4 17:23:05.367932 containerd[1435]: time="2024-09-04T17:23:05.367626420Z" level=info msg="StartContainer for \"2d5c37e5798249276f8b72a9b5874040681f2c9a9b03fe87cfb615ae7c31896d\" returns successfully" Sep 4 17:23:05.672659 kubelet[2499]: E0904 17:23:05.672585 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:05.675823 kubelet[2499]: E0904 17:23:05.675797 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:05.681389 kubelet[2499]: I0904 17:23:05.680723 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pdlfc" podStartSLOduration=20.680689652 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:05.680415507 +0000 UTC m=+35.242501800" watchObservedRunningTime="2024-09-04 17:23:05.680689652 +0000 UTC m=+35.242775905" Sep 4 17:23:05.712512 kubelet[2499]: I0904 17:23:05.710819 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sxqgn" podStartSLOduration=20.710780119 podCreationTimestamp="2024-09-04 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:05.698507761 +0000 UTC m=+35.260594054" watchObservedRunningTime="2024-09-04 17:23:05.710780119 +0000 UTC m=+35.272866412" Sep 4 17:23:06.151715 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848). Sep 4 17:23:06.205118 sshd[3897]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:06.207379 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:06.214331 systemd-logind[1420]: New session 8 of user core. Sep 4 17:23:06.227722 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 17:23:06.379713 sshd[3897]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:06.382172 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:23:06.382909 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:49848.service: Deactivated successfully. Sep 4 17:23:06.385877 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:23:06.387143 systemd-logind[1420]: Removed session 8. Sep 4 17:23:06.676810 kubelet[2499]: E0904 17:23:06.676771 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:06.677140 kubelet[2499]: E0904 17:23:06.676837 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:07.679042 kubelet[2499]: E0904 17:23:07.678685 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:07.679402 kubelet[2499]: E0904 17:23:07.679089 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:11.394398 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:49858.service - OpenSSH per-connection server daemon (10.0.0.1:49858). Sep 4 17:23:11.429810 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:11.430992 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:11.434835 systemd-logind[1420]: New session 9 of user core. Sep 4 17:23:11.444671 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:23:11.552440 sshd[3922]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:11.555096 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:49858.service: Deactivated successfully. Sep 4 17:23:11.557093 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:23:11.558527 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:23:11.559511 systemd-logind[1420]: Removed session 9. Sep 4 17:23:16.565764 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:47032.service - OpenSSH per-connection server daemon (10.0.0.1:47032). Sep 4 17:23:16.605035 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 47032 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:16.606626 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:16.611003 systemd-logind[1420]: New session 10 of user core. Sep 4 17:23:16.621742 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:23:16.734682 sshd[3938]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:16.749202 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:47032.service: Deactivated successfully. Sep 4 17:23:16.752261 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:23:16.754512 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:23:16.766794 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:47044.service - OpenSSH per-connection server daemon (10.0.0.1:47044). Sep 4 17:23:16.768303 systemd-logind[1420]: Removed session 10. 
Sep 4 17:23:16.801701 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 47044 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:16.803131 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:16.807096 systemd-logind[1420]: New session 11 of user core. Sep 4 17:23:16.818667 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:23:17.524901 sshd[3953]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:17.535143 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:47044.service: Deactivated successfully. Sep 4 17:23:17.540554 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:23:17.543169 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:23:17.551813 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:47056.service - OpenSSH per-connection server daemon (10.0.0.1:47056). Sep 4 17:23:17.552761 systemd-logind[1420]: Removed session 11. Sep 4 17:23:17.585979 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 47056 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:17.587470 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:17.592300 systemd-logind[1420]: New session 12 of user core. Sep 4 17:23:17.599694 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:23:17.720869 sshd[3967]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:17.726533 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:47056.service: Deactivated successfully. Sep 4 17:23:17.728777 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:23:17.729835 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:23:17.730969 systemd-logind[1420]: Removed session 12. Sep 4 17:23:22.732969 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:43830.service - OpenSSH per-connection server daemon (10.0.0.1:43830). Sep 4 17:23:22.768582 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 43830 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:22.770004 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:22.774728 systemd-logind[1420]: New session 13 of user core. Sep 4 17:23:22.786682 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:23:22.899762 sshd[3981]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:22.903282 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:43830.service: Deactivated successfully. Sep 4 17:23:22.904998 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:23:22.906091 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:23:22.906994 systemd-logind[1420]: Removed session 13. Sep 4 17:23:27.913912 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:43832.service - OpenSSH per-connection server daemon (10.0.0.1:43832). Sep 4 17:23:27.953319 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 43832 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:27.954734 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:27.958714 systemd-logind[1420]: New session 14 of user core. Sep 4 17:23:27.972698 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:23:28.084893 sshd[3995]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:28.096306 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:43832.service: Deactivated successfully. Sep 4 17:23:28.098188 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:23:28.099698 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:23:28.112779 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:43846.service - OpenSSH per-connection server daemon (10.0.0.1:43846). Sep 4 17:23:28.115048 systemd-logind[1420]: Removed session 14. Sep 4 17:23:28.149631 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 43846 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:28.150617 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:28.154382 systemd-logind[1420]: New session 15 of user core. Sep 4 17:23:28.161652 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:23:28.410608 sshd[4010]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:28.423225 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:43846.service: Deactivated successfully. Sep 4 17:23:28.424986 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:23:28.426382 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:23:28.427505 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:43858.service - OpenSSH per-connection server daemon (10.0.0.1:43858). Sep 4 17:23:28.430000 systemd-logind[1420]: Removed session 15. Sep 4 17:23:28.490624 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 43858 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:28.492239 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:28.496316 systemd-logind[1420]: New session 16 of user core. Sep 4 17:23:28.505679 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:23:29.355734 sshd[4023]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:29.367825 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:43858.service: Deactivated successfully. Sep 4 17:23:29.371915 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:23:29.374012 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:23:29.387002 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). Sep 4 17:23:29.387926 systemd-logind[1420]: Removed session 16. Sep 4 17:23:29.426424 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:29.428246 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:29.433276 systemd-logind[1420]: New session 17 of user core. Sep 4 17:23:29.447748 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:23:29.736147 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:29.750012 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:43874.service: Deactivated successfully. Sep 4 17:23:29.752430 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:23:29.755221 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:23:29.762858 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:43876.service - OpenSSH per-connection server daemon (10.0.0.1:43876). Sep 4 17:23:29.764967 systemd-logind[1420]: Removed session 17. 
Sep 4 17:23:29.795607 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 43876 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:29.797113 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:29.800723 systemd-logind[1420]: New session 18 of user core. Sep 4 17:23:29.815650 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:23:29.926008 sshd[4056]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:29.929540 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:43876.service: Deactivated successfully. Sep 4 17:23:29.931686 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:23:29.932449 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:23:29.933453 systemd-logind[1420]: Removed session 18. Sep 4 17:23:34.937309 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:60124.service - OpenSSH per-connection server daemon (10.0.0.1:60124). Sep 4 17:23:34.977693 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 60124 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:34.978122 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:34.982284 systemd-logind[1420]: New session 19 of user core. Sep 4 17:23:34.992710 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:23:35.111936 sshd[4075]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:35.115110 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:60124.service: Deactivated successfully. Sep 4 17:23:35.118318 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:23:35.119975 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:23:35.121150 systemd-logind[1420]: Removed session 19. Sep 4 17:23:38.546387 kubelet[2499]: E0904 17:23:38.544591 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:40.122355 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:60126.service - OpenSSH per-connection server daemon (10.0.0.1:60126). Sep 4 17:23:40.162625 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 60126 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:40.164178 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:40.170242 systemd-logind[1420]: New session 20 of user core. Sep 4 17:23:40.181702 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:23:40.299749 sshd[4089]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:40.305980 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:60126.service: Deactivated successfully. Sep 4 17:23:40.307888 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:23:40.308691 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:23:40.309757 systemd-logind[1420]: Removed session 20. Sep 4 17:23:43.544779 kubelet[2499]: E0904 17:23:43.544742 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:45.313629 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002). 
Sep 4 17:23:45.359570 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:45.360111 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:45.366612 systemd-logind[1420]: New session 21 of user core. Sep 4 17:23:45.374665 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:23:45.496233 sshd[4103]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:45.506108 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:33002.service: Deactivated successfully. Sep 4 17:23:45.510169 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:23:45.513165 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:23:45.520800 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). Sep 4 17:23:45.523240 systemd-logind[1420]: Removed session 21. Sep 4 17:23:45.560672 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:45.562152 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:45.568407 systemd-logind[1420]: New session 22 of user core. Sep 4 17:23:45.577682 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:23:47.968224 containerd[1435]: time="2024-09-04T17:23:47.968069360Z" level=info msg="StopContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" with timeout 30 (s)" Sep 4 17:23:47.968825 containerd[1435]: time="2024-09-04T17:23:47.968749987Z" level=info msg="Stop container \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" with signal terminated" Sep 4 17:23:47.984025 systemd[1]: cri-containerd-417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42.scope: Deactivated successfully. Sep 4 17:23:47.990449 systemd[1]: run-containerd-runc-k8s.io-6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033-runc.DYbfQf.mount: Deactivated successfully. Sep 4 17:23:48.009607 containerd[1435]: time="2024-09-04T17:23:48.009551044Z" level=info msg="StopContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" with timeout 2 (s)" Sep 4 17:23:48.009861 containerd[1435]: time="2024-09-04T17:23:48.009832279Z" level=info msg="Stop container \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" with signal terminated" Sep 4 17:23:48.018727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42-rootfs.mount: Deactivated successfully. 
Sep 4 17:23:48.022242 systemd-networkd[1376]: lxc_health: Link DOWN Sep 4 17:23:48.022247 systemd-networkd[1376]: lxc_health: Lost carrier Sep 4 17:23:48.028413 containerd[1435]: time="2024-09-04T17:23:48.028358987Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:23:48.037847 containerd[1435]: time="2024-09-04T17:23:48.037761938Z" level=info msg="shim disconnected" id=417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42 namespace=k8s.io Sep 4 17:23:48.037847 containerd[1435]: time="2024-09-04T17:23:48.037835297Z" level=warning msg="cleaning up after shim disconnected" id=417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42 namespace=k8s.io Sep 4 17:23:48.037847 containerd[1435]: time="2024-09-04T17:23:48.037846737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:48.056959 systemd[1]: cri-containerd-6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033.scope: Deactivated successfully. Sep 4 17:23:48.057310 systemd[1]: cri-containerd-6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033.scope: Consumed 6.917s CPU time. Sep 4 17:23:48.074842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033-rootfs.mount: Deactivated successfully. Sep 4 17:23:48.081092 containerd[1435]: time="2024-09-04T17:23:48.080371575Z" level=info msg="shim disconnected" id=6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033 namespace=k8s.io Sep 4 17:23:48.081092 containerd[1435]: time="2024-09-04T17:23:48.081088842Z" level=warning msg="cleaning up after shim disconnected" id=6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033 namespace=k8s.io Sep 4 17:23:48.081092 containerd[1435]: time="2024-09-04T17:23:48.081100722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:48.083380 containerd[1435]: time="2024-09-04T17:23:48.083211604Z" level=info msg="StopContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" returns successfully" Sep 4 17:23:48.083904 containerd[1435]: time="2024-09-04T17:23:48.083875632Z" level=info msg="StopPodSandbox for \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\"" Sep 4 17:23:48.083959 containerd[1435]: time="2024-09-04T17:23:48.083916671Z" level=info msg="Container to stop \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.085499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff-shm.mount: Deactivated successfully. Sep 4 17:23:48.092842 systemd[1]: cri-containerd-090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff.scope: Deactivated successfully. 
Sep 4 17:23:48.097398 containerd[1435]: time="2024-09-04T17:23:48.097319231Z" level=info msg="StopContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" returns successfully" Sep 4 17:23:48.098076 containerd[1435]: time="2024-09-04T17:23:48.098047018Z" level=info msg="StopPodSandbox for \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\"" Sep 4 17:23:48.098150 containerd[1435]: time="2024-09-04T17:23:48.098088937Z" level=info msg="Container to stop \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.098150 containerd[1435]: time="2024-09-04T17:23:48.098102177Z" level=info msg="Container to stop \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.098150 containerd[1435]: time="2024-09-04T17:23:48.098111737Z" level=info msg="Container to stop \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.098150 containerd[1435]: time="2024-09-04T17:23:48.098121456Z" level=info msg="Container to stop \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.098150 containerd[1435]: time="2024-09-04T17:23:48.098130096Z" level=info msg="Container to stop \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:23:48.103351 systemd[1]: cri-containerd-d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472.scope: Deactivated successfully. 
Sep 4 17:23:48.131150 containerd[1435]: time="2024-09-04T17:23:48.131089305Z" level=info msg="shim disconnected" id=d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472 namespace=k8s.io Sep 4 17:23:48.131150 containerd[1435]: time="2024-09-04T17:23:48.131146264Z" level=warning msg="cleaning up after shim disconnected" id=d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472 namespace=k8s.io Sep 4 17:23:48.131150 containerd[1435]: time="2024-09-04T17:23:48.131156224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:48.141090 containerd[1435]: time="2024-09-04T17:23:48.141035407Z" level=info msg="shim disconnected" id=090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff namespace=k8s.io Sep 4 17:23:48.141090 containerd[1435]: time="2024-09-04T17:23:48.141087326Z" level=warning msg="cleaning up after shim disconnected" id=090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff namespace=k8s.io Sep 4 17:23:48.141090 containerd[1435]: time="2024-09-04T17:23:48.141096966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:48.149908 containerd[1435]: time="2024-09-04T17:23:48.149857409Z" level=info msg="TearDown network for sandbox \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" successfully" Sep 4 17:23:48.149908 containerd[1435]: time="2024-09-04T17:23:48.149895528Z" level=info msg="StopPodSandbox for \"d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472\" returns successfully" Sep 4 17:23:48.157435 containerd[1435]: time="2024-09-04T17:23:48.157400154Z" level=info msg="TearDown network for sandbox \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\" successfully" Sep 4 17:23:48.157435 containerd[1435]: time="2024-09-04T17:23:48.157431593Z" level=info msg="StopPodSandbox for \"090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff\" returns successfully" Sep 4 17:23:48.301115 kubelet[2499]: I0904 17:23:48.301054 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cni-path\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301115 kubelet[2499]: I0904 17:23:48.301103 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-hubble-tls\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301115 kubelet[2499]: I0904 17:23:48.301126 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94320546-42ea-4da9-a541-174823b30fcf-clustermesh-secrets\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301147 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dznjs\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-kube-api-access-dznjs\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301167 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-cilium-config-path\") pod \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\" (UID: \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301184 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-etc-cni-netd\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301201 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-xtables-lock\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301220 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-bpf-maps\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301612 kubelet[2499]: I0904 17:23:48.301236 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-run\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301253 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-net\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301276 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-lib-modules\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301295 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-kernel\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301316 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qspjr\" (UniqueName: \"kubernetes.io/projected/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-kube-api-access-qspjr\") pod \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\" (UID: \"256c51a5-5b0e-472e-bf3b-7fa93e912fc5\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301335 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-hostproc\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301746 kubelet[2499]: I0904 17:23:48.301351 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-cgroup\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.301872 kubelet[2499]: I0904 17:23:48.301371 2499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94320546-42ea-4da9-a541-174823b30fcf-cilium-config-path\") pod \"94320546-42ea-4da9-a541-174823b30fcf\" (UID: \"94320546-42ea-4da9-a541-174823b30fcf\") " Sep 4 17:23:48.306012 kubelet[2499]: I0904 17:23:48.304721 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306012 kubelet[2499]: I0904 17:23:48.304794 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306012 kubelet[2499]: I0904 17:23:48.304813 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306012 kubelet[2499]: I0904 17:23:48.304830 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306012 kubelet[2499]: I0904 17:23:48.304846 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306206 kubelet[2499]: I0904 17:23:48.304913 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306206 kubelet[2499]: I0904 17:23:48.304930 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-hostproc" (OuterVolumeSpecName: "hostproc") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306206 kubelet[2499]: I0904 17:23:48.304975 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306206 kubelet[2499]: I0904 17:23:48.305760 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cni-path" (OuterVolumeSpecName: "cni-path") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.306206 kubelet[2499]: I0904 17:23:48.305975 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:23:48.307882 kubelet[2499]: I0904 17:23:48.307372 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94320546-42ea-4da9-a541-174823b30fcf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:23:48.314382 kubelet[2499]: I0904 17:23:48.309154 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "256c51a5-5b0e-472e-bf3b-7fa93e912fc5" (UID: "256c51a5-5b0e-472e-bf3b-7fa93e912fc5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:23:48.316443 kubelet[2499]: I0904 17:23:48.316020 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-kube-api-access-dznjs" (OuterVolumeSpecName: "kube-api-access-dznjs") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "kube-api-access-dznjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:23:48.316443 kubelet[2499]: I0904 17:23:48.316130 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94320546-42ea-4da9-a541-174823b30fcf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:23:48.316443 kubelet[2499]: I0904 17:23:48.316270 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-kube-api-access-qspjr" (OuterVolumeSpecName: "kube-api-access-qspjr") pod "256c51a5-5b0e-472e-bf3b-7fa93e912fc5" (UID: "256c51a5-5b0e-472e-bf3b-7fa93e912fc5"). InnerVolumeSpecName "kube-api-access-qspjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:23:48.316596 kubelet[2499]: I0904 17:23:48.316553 2499 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94320546-42ea-4da9-a541-174823b30fcf" (UID: "94320546-42ea-4da9-a541-174823b30fcf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:23:48.402484 kubelet[2499]: I0904 17:23:48.402433 2499 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402484 kubelet[2499]: I0904 17:23:48.402470 2499 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402484 kubelet[2499]: I0904 17:23:48.402493 2499 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94320546-42ea-4da9-a541-174823b30fcf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402506 2499 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dznjs\" (UniqueName: \"kubernetes.io/projected/94320546-42ea-4da9-a541-174823b30fcf-kube-api-access-dznjs\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402516 2499 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402525 2499 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402534 2499 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402544 2499 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402553 2499 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402563 2499 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402659 kubelet[2499]: I0904 17:23:48.402575 2499 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402836 kubelet[2499]: I0904 17:23:48.402584 2499 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402836 kubelet[2499]: I0904 17:23:48.402593 2499 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qspjr\" (UniqueName: \"kubernetes.io/projected/256c51a5-5b0e-472e-bf3b-7fa93e912fc5-kube-api-access-qspjr\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402836 kubelet[2499]: I0904 17:23:48.402603 2499 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402836 kubelet[2499]: I0904 17:23:48.402612 2499 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94320546-42ea-4da9-a541-174823b30fcf-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.402836 kubelet[2499]: I0904 17:23:48.402621 2499 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94320546-42ea-4da9-a541-174823b30fcf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:23:48.556035 systemd[1]: Removed slice kubepods-besteffort-pod256c51a5_5b0e_472e_bf3b_7fa93e912fc5.slice - libcontainer container kubepods-besteffort-pod256c51a5_5b0e_472e_bf3b_7fa93e912fc5.slice. Sep 4 17:23:48.558172 systemd[1]: Removed slice kubepods-burstable-pod94320546_42ea_4da9_a541_174823b30fcf.slice - libcontainer container kubepods-burstable-pod94320546_42ea_4da9_a541_174823b30fcf.slice. Sep 4 17:23:48.558253 systemd[1]: kubepods-burstable-pod94320546_42ea_4da9_a541_174823b30fcf.slice: Consumed 7.046s CPU time. 
Sep 4 17:23:48.771400 kubelet[2499]: I0904 17:23:48.768024 2499 scope.go:117] "RemoveContainer" containerID="6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033" Sep 4 17:23:48.771807 containerd[1435]: time="2024-09-04T17:23:48.771467467Z" level=info msg="RemoveContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\"" Sep 4 17:23:48.777836 containerd[1435]: time="2024-09-04T17:23:48.777783154Z" level=info msg="RemoveContainer for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" returns successfully" Sep 4 17:23:48.778143 kubelet[2499]: I0904 17:23:48.778108 2499 scope.go:117] "RemoveContainer" containerID="457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d" Sep 4 17:23:48.779191 containerd[1435]: time="2024-09-04T17:23:48.779129410Z" level=info msg="RemoveContainer for \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\"" Sep 4 17:23:48.781542 containerd[1435]: time="2024-09-04T17:23:48.781498727Z" level=info msg="RemoveContainer for \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\" returns successfully" Sep 4 17:23:48.781765 kubelet[2499]: I0904 17:23:48.781725 2499 scope.go:117] "RemoveContainer" containerID="fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f" Sep 4 17:23:48.782933 containerd[1435]: time="2024-09-04T17:23:48.782884742Z" level=info msg="RemoveContainer for \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\"" Sep 4 17:23:48.785662 containerd[1435]: time="2024-09-04T17:23:48.785624733Z" level=info msg="RemoveContainer for \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\" returns successfully" Sep 4 17:23:48.785991 kubelet[2499]: I0904 17:23:48.785851 2499 scope.go:117] "RemoveContainer" containerID="9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c" Sep 4 17:23:48.789238 containerd[1435]: time="2024-09-04T17:23:48.789200149Z" level=info msg="RemoveContainer for \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\"" Sep 4 17:23:48.791580 containerd[1435]: time="2024-09-04T17:23:48.791549627Z" level=info msg="RemoveContainer for \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\" returns successfully" Sep 4 17:23:48.791803 kubelet[2499]: I0904 17:23:48.791778 2499 scope.go:117] "RemoveContainer" containerID="4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44" Sep 4 17:23:48.793547 containerd[1435]: time="2024-09-04T17:23:48.793514712Z" level=info msg="RemoveContainer for \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\"" Sep 4 17:23:48.796186 containerd[1435]: time="2024-09-04T17:23:48.796150624Z" level=info msg="RemoveContainer for \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\" returns successfully" Sep 4 17:23:48.796412 kubelet[2499]: I0904 17:23:48.796387 2499 scope.go:117] "RemoveContainer" containerID="6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033" Sep 4 17:23:48.796932 containerd[1435]: time="2024-09-04T17:23:48.796890691Z" level=error msg="ContainerStatus for \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\": not found" Sep 4 17:23:48.807138 kubelet[2499]: E0904 17:23:48.806995 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try 
to find container \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\": not found" containerID="6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033" Sep 4 17:23:48.807425 kubelet[2499]: I0904 17:23:48.807326 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033"} err="failed to get container status \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b3eade8357165b78972d03f222941b526f0f137de88c0df24b6a6346f0a5033\": not found" Sep 4 17:23:48.807425 kubelet[2499]: I0904 17:23:48.807359 2499 scope.go:117] "RemoveContainer" containerID="457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d" Sep 4 17:23:48.807830 containerd[1435]: time="2024-09-04T17:23:48.807788216Z" level=error msg="ContainerStatus for \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\": not found" Sep 4 17:23:48.808139 kubelet[2499]: E0904 17:23:48.808000 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\": not found" containerID="457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d" Sep 4 17:23:48.808139 kubelet[2499]: I0904 17:23:48.808039 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d"} err="failed to get container status \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\": rpc error: code = NotFound desc = an error occurred when try to find container \"457d16fbbe258a775e5143a06525ccfcb813abf56101ab2b9e1bc7e90e61d59d\": not found" Sep 4 17:23:48.808139 kubelet[2499]: I0904 17:23:48.808050 2499 scope.go:117] "RemoveContainer" containerID="fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f" Sep 4 17:23:48.808437 containerd[1435]: time="2024-09-04T17:23:48.808401325Z" level=error msg="ContainerStatus for \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\": not found" Sep 4 17:23:48.808616 kubelet[2499]: E0904 17:23:48.808596 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\": not found" containerID="fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f" Sep 4 17:23:48.808651 kubelet[2499]: I0904 17:23:48.808636 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f"} err="failed to get container status \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbfac86ff78800fdd58115122f50fb2e1c8ea4ebe932049a4ae0f6db2a93cc1f\": not found" Sep 4 17:23:48.808651 kubelet[2499]: I0904 17:23:48.808648 
2499 scope.go:117] "RemoveContainer" containerID="9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c" Sep 4 17:23:48.808841 containerd[1435]: time="2024-09-04T17:23:48.808804198Z" level=error msg="ContainerStatus for \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\": not found" Sep 4 17:23:48.809003 kubelet[2499]: E0904 17:23:48.808982 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\": not found" containerID="9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c" Sep 4 17:23:48.809054 kubelet[2499]: I0904 17:23:48.809029 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c"} err="failed to get container status \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ae367b7e56d9ad775d8b1b1af8a3188ccb7c675ec7d76a89d9529020cdf8d5c\": not found" Sep 4 17:23:48.809054 kubelet[2499]: I0904 17:23:48.809040 2499 scope.go:117] "RemoveContainer" containerID="4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44" Sep 4 17:23:48.809217 containerd[1435]: time="2024-09-04T17:23:48.809170151Z" level=error msg="ContainerStatus for \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\": not found" Sep 4 17:23:48.809388 kubelet[2499]: E0904 17:23:48.809371 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\": not found" containerID="4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44" Sep 4 17:23:48.809439 kubelet[2499]: I0904 17:23:48.809397 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44"} err="failed to get container status \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\": rpc error: code = NotFound desc = an error occurred when try to find container \"4837de5bfa32ae9cda21c6acd500eb46432641aff2749be0fe406b6f5c265a44\": not found" Sep 4 17:23:48.809439 kubelet[2499]: I0904 17:23:48.809407 2499 scope.go:117] "RemoveContainer" containerID="417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42" Sep 4 17:23:48.810566 containerd[1435]: time="2024-09-04T17:23:48.810537287Z" level=info msg="RemoveContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\"" Sep 4 17:23:48.813079 containerd[1435]: time="2024-09-04T17:23:48.813042082Z" level=info msg="RemoveContainer for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" returns successfully" Sep 4 17:23:48.813358 kubelet[2499]: I0904 17:23:48.813340 2499 scope.go:117] "RemoveContainer" containerID="417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42" Sep 4 17:23:48.815403 containerd[1435]: 
time="2024-09-04T17:23:48.815318281Z" level=error msg="ContainerStatus for \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\": not found" Sep 4 17:23:48.815496 kubelet[2499]: E0904 17:23:48.815459 2499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\": not found" containerID="417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42" Sep 4 17:23:48.815543 kubelet[2499]: I0904 17:23:48.815531 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42"} err="failed to get container status \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\": rpc error: code = NotFound desc = an error occurred when try to find container \"417ef58c803d415c02c3800123e97236013e7a4c5e7683bc3f2ce69178957c42\": not found" Sep 4 17:23:48.987185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090cd71d7e0589623cef3b8318e7db6c8250ae54bb55c99d91b4bf8fdcb813ff-rootfs.mount: Deactivated successfully. Sep 4 17:23:48.987309 systemd[1]: var-lib-kubelet-pods-256c51a5\x2d5b0e\x2d472e\x2dbf3b\x2d7fa93e912fc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqspjr.mount: Deactivated successfully. Sep 4 17:23:48.987384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472-rootfs.mount: Deactivated successfully. Sep 4 17:23:48.987441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7cdf8cda0a8f0e9edc9f7056c50efa5e324a224c7f72a28d49f0fa2a389b472-shm.mount: Deactivated successfully. Sep 4 17:23:48.987505 systemd[1]: var-lib-kubelet-pods-94320546\x2d42ea\x2d4da9\x2da541\x2d174823b30fcf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddznjs.mount: Deactivated successfully. Sep 4 17:23:48.987557 systemd[1]: var-lib-kubelet-pods-94320546\x2d42ea\x2d4da9\x2da541\x2d174823b30fcf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:23:48.987607 systemd[1]: var-lib-kubelet-pods-94320546\x2d42ea\x2d4da9\x2da541\x2d174823b30fcf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:23:49.920250 sshd[4117]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:49.933637 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:33014.service: Deactivated successfully. Sep 4 17:23:49.937313 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:23:49.937619 systemd[1]: session-22.scope: Consumed 1.692s CPU time. Sep 4 17:23:49.939063 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:23:49.945782 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). Sep 4 17:23:49.947021 systemd-logind[1420]: Removed session 22. Sep 4 17:23:49.985158 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:49.986716 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:49.990966 systemd-logind[1420]: New session 23 of user core. 
Sep 4 17:23:50.005650 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:23:50.545116 kubelet[2499]: E0904 17:23:50.545080 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:50.548395 kubelet[2499]: I0904 17:23:50.548241 2499 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="256c51a5-5b0e-472e-bf3b-7fa93e912fc5" path="/var/lib/kubelet/pods/256c51a5-5b0e-472e-bf3b-7fa93e912fc5/volumes" Sep 4 17:23:50.548683 kubelet[2499]: I0904 17:23:50.548667 2499 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94320546-42ea-4da9-a541-174823b30fcf" path="/var/lib/kubelet/pods/94320546-42ea-4da9-a541-174823b30fcf/volumes" Sep 4 17:23:50.622089 kubelet[2499]: E0904 17:23:50.622057 2499 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:23:51.243535 sshd[4278]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:51.252271 kubelet[2499]: I0904 17:23:51.252230 2499 topology_manager.go:215] "Topology Admit Handler" podUID="eff57270-6b0e-4acb-950b-5105907db03d" podNamespace="kube-system" podName="cilium-bvxkd" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252287 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="mount-bpf-fs" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252300 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="clean-cilium-state" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252310 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="mount-cgroup" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252316 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="apply-sysctl-overwrites" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252323 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="256c51a5-5b0e-472e-bf3b-7fa93e912fc5" containerName="cilium-operator" Sep 4 17:23:51.252379 kubelet[2499]: E0904 17:23:51.252330 2499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="cilium-agent" Sep 4 17:23:51.252379 kubelet[2499]: I0904 17:23:51.252353 2499 memory_manager.go:346] "RemoveStaleState removing state" podUID="94320546-42ea-4da9-a541-174823b30fcf" containerName="cilium-agent" Sep 4 17:23:51.252379 kubelet[2499]: I0904 17:23:51.252362 2499 memory_manager.go:346] "RemoveStaleState removing state" podUID="256c51a5-5b0e-472e-bf3b-7fa93e912fc5" containerName="cilium-operator" Sep 4 17:23:51.256657 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:33024.service: Deactivated successfully. Sep 4 17:23:51.262462 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:23:51.265802 systemd[1]: session-23.scope: Consumed 1.157s CPU time. Sep 4 17:23:51.272212 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:23:51.276650 systemd-logind[1420]: Removed session 23. Sep 4 17:23:51.286789 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:33032.service - OpenSSH per-connection server daemon (10.0.0.1:33032). 
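The recurring dns.go:153 error means the resolv.conf the kubelet reads lists more than the three nameservers Kubernetes (and glibc) will honor, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are dropped. A sketch of the same check against /etc/resolv.conf; that path is an assumption here, since the kubelet can be pointed at a different file via --resolv-conf:

    // dnscheck.go - sketch: reproduce the kubelet's "Nameserver limits exceeded"
    // warning by counting nameserver lines in a resolv.conf.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // the Kubernetes/glibc cap

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %d found, only %v will be applied\n",
                len(servers), servers[:maxNameservers])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }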
Sep 4 17:23:51.291163 systemd[1]: Created slice kubepods-burstable-podeff57270_6b0e_4acb_950b_5105907db03d.slice - libcontainer container kubepods-burstable-podeff57270_6b0e_4acb_950b_5105907db03d.slice. Sep 4 17:23:51.320488 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 33032 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:51.321869 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:51.325370 systemd-logind[1420]: New session 24 of user core. Sep 4 17:23:51.335615 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:23:51.386824 sshd[4291]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:51.400107 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:33032.service: Deactivated successfully. Sep 4 17:23:51.401974 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:23:51.404594 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:23:51.418771 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:33034.service - OpenSSH per-connection server daemon (10.0.0.1:33034). Sep 4 17:23:51.419373 kubelet[2499]: I0904 17:23:51.419255 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-cilium-run\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419373 kubelet[2499]: I0904 17:23:51.419353 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eff57270-6b0e-4acb-950b-5105907db03d-clustermesh-secrets\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419655 kubelet[2499]: I0904 17:23:51.419521 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-cilium-cgroup\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419655 kubelet[2499]: I0904 17:23:51.419604 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-xtables-lock\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419655 kubelet[2499]: I0904 17:23:51.419628 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-hostproc\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419887 kubelet[2499]: I0904 17:23:51.419770 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eff57270-6b0e-4acb-950b-5105907db03d-cilium-ipsec-secrets\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419887 kubelet[2499]: I0904 17:23:51.419811 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-cni-path\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419887 kubelet[2499]: I0904 17:23:51.419831 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-etc-cni-netd\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419887 kubelet[2499]: I0904 17:23:51.419852 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eff57270-6b0e-4acb-950b-5105907db03d-cilium-config-path\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.419887 kubelet[2499]: I0904 17:23:51.419872 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eff57270-6b0e-4acb-950b-5105907db03d-hubble-tls\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420008 kubelet[2499]: I0904 17:23:51.419922 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtbs\" (UniqueName: \"kubernetes.io/projected/eff57270-6b0e-4acb-950b-5105907db03d-kube-api-access-wdtbs\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420008 kubelet[2499]: I0904 17:23:51.419956 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-lib-modules\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420008 kubelet[2499]: I0904 17:23:51.419975 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-bpf-maps\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420008 kubelet[2499]: I0904 17:23:51.420004 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-host-proc-sys-net\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420092 kubelet[2499]: I0904 17:23:51.420024 2499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eff57270-6b0e-4acb-950b-5105907db03d-host-proc-sys-kernel\") pod \"cilium-bvxkd\" (UID: \"eff57270-6b0e-4acb-950b-5105907db03d\") " pod="kube-system/cilium-bvxkd" Sep 4 17:23:51.420168 systemd-logind[1420]: Removed session 24. 
Sep 4 17:23:51.450090 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 33034 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:23:51.451367 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:23:51.457535 systemd-logind[1420]: New session 25 of user core. Sep 4 17:23:51.463646 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:23:51.595125 kubelet[2499]: E0904 17:23:51.595094 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:51.596967 containerd[1435]: time="2024-09-04T17:23:51.596919632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvxkd,Uid:eff57270-6b0e-4acb-950b-5105907db03d,Namespace:kube-system,Attempt:0,}" Sep 4 17:23:51.616518 containerd[1435]: time="2024-09-04T17:23:51.616072350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:51.616518 containerd[1435]: time="2024-09-04T17:23:51.616506302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:51.616518 containerd[1435]: time="2024-09-04T17:23:51.616521662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:51.616725 containerd[1435]: time="2024-09-04T17:23:51.616601061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:51.632662 systemd[1]: Started cri-containerd-2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be.scope - libcontainer container 2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be. Sep 4 17:23:51.651357 containerd[1435]: time="2024-09-04T17:23:51.651317436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvxkd,Uid:eff57270-6b0e-4acb-950b-5105907db03d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\"" Sep 4 17:23:51.652079 kubelet[2499]: E0904 17:23:51.652058 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:51.665843 containerd[1435]: time="2024-09-04T17:23:51.665751552Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:23:51.674999 containerd[1435]: time="2024-09-04T17:23:51.674942717Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1\"" Sep 4 17:23:51.675504 containerd[1435]: time="2024-09-04T17:23:51.675462789Z" level=info msg="StartContainer for \"df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1\"" Sep 4 17:23:51.697665 systemd[1]: Started cri-containerd-df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1.scope - libcontainer container df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1. 
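From here the log follows the usual CRI sequence for the new pod: RunPodSandbox creates sandbox 2264d10d..., then each init step is a CreateContainer within that sandbox followed by StartContainer, executed by the io.containerd.runc.v2 shim in the k8s.io namespace. The kubelet drives this over the CRI API; the same lifecycle can be sketched against containerd's own Go client. The image reference and ids below are placeholders, and the containerd v1 Go client module is assumed rather than anything shown in this log:

    // runtask.go - sketch of the containerd calls behind the CreateContainer /
    // StartContainer entries above, reduced to a single standalone container.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace seen in the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "quay.io/cilium/cilium:v1.14", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "mount-cgroup-demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("mount-cgroup-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // NewTask spawns the runc shim ("io.containerd.runc.v2" above);
        // Start launches the process, Wait/Delete reap it.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        code, _, err := (<-statusC).Result()
        log.Println("exit code:", code, "err:", err)
    }

Wait is registered before Start so the exit status is not missed for very short-lived containers, which is exactly what the init steps below turn out to be.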
Sep 4 17:23:51.718593 containerd[1435]: time="2024-09-04T17:23:51.716724253Z" level=info msg="StartContainer for \"df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1\" returns successfully" Sep 4 17:23:51.732713 systemd[1]: cri-containerd-df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1.scope: Deactivated successfully. Sep 4 17:23:51.758811 containerd[1435]: time="2024-09-04T17:23:51.758734985Z" level=info msg="shim disconnected" id=df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1 namespace=k8s.io Sep 4 17:23:51.758811 containerd[1435]: time="2024-09-04T17:23:51.758796664Z" level=warning msg="cleaning up after shim disconnected" id=df923220bb92a8da71dabef47bf4c6245e891885dd15041bdd50754a909787f1 namespace=k8s.io Sep 4 17:23:51.758811 containerd[1435]: time="2024-09-04T17:23:51.758805384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:51.782670 kubelet[2499]: E0904 17:23:51.782636 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:51.786153 containerd[1435]: time="2024-09-04T17:23:51.785746170Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:23:51.795788 containerd[1435]: time="2024-09-04T17:23:51.795740321Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac\"" Sep 4 17:23:51.796265 containerd[1435]: time="2024-09-04T17:23:51.796175554Z" level=info msg="StartContainer for \"9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac\"" Sep 4 17:23:51.823678 systemd[1]: Started cri-containerd-9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac.scope - libcontainer container 9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac. Sep 4 17:23:51.844940 containerd[1435]: time="2024-09-04T17:23:51.844884733Z" level=info msg="StartContainer for \"9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac\" returns successfully" Sep 4 17:23:51.853173 systemd[1]: cri-containerd-9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac.scope: Deactivated successfully. 
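Each init container follows the same beat: CreateContainer returns an id, StartContainer succeeds, the transient cri-containerd-<id>.scope is deactivated as soon as the process exits, and containerd logs "shim disconnected" for that id while reaping the shim. A small stdlib sketch that pairs those messages by container id when a journal excerpt like this one is fed on stdin (the regexes assume the escaped-quote formatting shown above):

    // initsteps.go - sketch: extract the per-container lifecycle from journal
    // lines by matching CreateContainer and "shim disconnected" on the id.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        created      = regexp.MustCompile(`returns container id \\"([0-9a-f]{64})\\"`)
        disconnected = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := created.FindStringSubmatch(line); m != nil {
                fmt.Println("created     ", m[1][:12])
            }
            if m := disconnected.FindStringSubmatch(line); m != nil {
                fmt.Println("shim exited ", m[1][:12])
            }
        }
    }

Feeding this section through it yields one created/exited pair per init step (mount-cgroup, apply-sysctl-overwrites, and so on).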
Sep 4 17:23:51.877587 containerd[1435]: time="2024-09-04T17:23:51.877526543Z" level=info msg="shim disconnected" id=9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac namespace=k8s.io Sep 4 17:23:51.877587 containerd[1435]: time="2024-09-04T17:23:51.877578982Z" level=warning msg="cleaning up after shim disconnected" id=9832cb710ac10595cc3e176f991b95d1d466645b78f5dee2f162334eaf1663ac namespace=k8s.io Sep 4 17:23:51.877587 containerd[1435]: time="2024-09-04T17:23:51.877587582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:52.120754 kubelet[2499]: I0904 17:23:52.120630 2499 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:23:52Z","lastTransitionTime":"2024-09-04T17:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:23:52.786720 kubelet[2499]: E0904 17:23:52.786470 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:52.791250 containerd[1435]: time="2024-09-04T17:23:52.791189488Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:23:52.801941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327471280.mount: Deactivated successfully. Sep 4 17:23:52.803970 containerd[1435]: time="2024-09-04T17:23:52.803929918Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe\"" Sep 4 17:23:52.805603 containerd[1435]: time="2024-09-04T17:23:52.805560411Z" level=info msg="StartContainer for \"26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe\"" Sep 4 17:23:52.835641 systemd[1]: Started cri-containerd-26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe.scope - libcontainer container 26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe. Sep 4 17:23:52.857553 containerd[1435]: time="2024-09-04T17:23:52.857502273Z" level=info msg="StartContainer for \"26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe\" returns successfully" Sep 4 17:23:52.858626 systemd[1]: cri-containerd-26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe.scope: Deactivated successfully. Sep 4 17:23:52.881510 containerd[1435]: time="2024-09-04T17:23:52.881414798Z" level=info msg="shim disconnected" id=26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe namespace=k8s.io Sep 4 17:23:52.881510 containerd[1435]: time="2024-09-04T17:23:52.881504516Z" level=warning msg="cleaning up after shim disconnected" id=26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe namespace=k8s.io Sep 4 17:23:52.881510 containerd[1435]: time="2024-09-04T17:23:52.881515116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:53.527564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26ffacb9bfd0ff5ac73faadba2067700c06c9136297768e2821dce3f89d293fe-rootfs.mount: Deactivated successfully. 
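The "container runtime network not ready ... cni plugin not initialized" condition, and the "Node became not ready" status it produces, persist until the starting Cilium agent writes a CNI network config onto the node. A quick way to see which side of that transition a node is on is to look for a config in the CNI directory; /etc/cni/net.d is the conventional default and is an assumption here, since the runtime can be configured to read elsewhere:

    // cnicheck.go - sketch: check whether any CNI network config exists yet,
    // the condition that clears the NetworkPluginNotReady state above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("no CNI config dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI network config found - node will stay NotReady")
        }
    }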
Sep 4 17:23:53.790671 kubelet[2499]: E0904 17:23:53.790567 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:53.794370 containerd[1435]: time="2024-09-04T17:23:53.793945380Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:23:53.809190 containerd[1435]: time="2024-09-04T17:23:53.809131094Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d\"" Sep 4 17:23:53.809800 containerd[1435]: time="2024-09-04T17:23:53.809652926Z" level=info msg="StartContainer for \"7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d\"" Sep 4 17:23:53.835717 systemd[1]: Started cri-containerd-7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d.scope - libcontainer container 7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d. Sep 4 17:23:53.854901 systemd[1]: cri-containerd-7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d.scope: Deactivated successfully. Sep 4 17:23:53.856438 containerd[1435]: time="2024-09-04T17:23:53.856370729Z" level=info msg="StartContainer for \"7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d\" returns successfully" Sep 4 17:23:53.877068 containerd[1435]: time="2024-09-04T17:23:53.876857917Z" level=info msg="shim disconnected" id=7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d namespace=k8s.io Sep 4 17:23:53.877068 containerd[1435]: time="2024-09-04T17:23:53.876913636Z" level=warning msg="cleaning up after shim disconnected" id=7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d namespace=k8s.io Sep 4 17:23:53.877068 containerd[1435]: time="2024-09-04T17:23:53.876921436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:54.527654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c115b128ee045b88e1d7869c907890ea5dae522cf03e96d06129db1d8bbc67d-rootfs.mount: Deactivated successfully. 
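Because the journal timestamps are microsecond-precise, the lifetime of each short init step can be read straight off the lines above; clean-cilium-state, for instance, runs between the "Started cri-containerd-7c115b..." and "Deactivated successfully" messages. A tiny sketch of that arithmetic (the journal format omits the year, so one is supplied for parsing):

    // stepdur.go - sketch: elapsed time between two journal timestamps from the
    // excerpt above (scope start vs. scope deactivation of clean-cilium-state).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006 Jan _2 15:04:05.000000"
        start, err := time.Parse(layout, "2024 Sep 4 17:23:53.835717")
        if err != nil {
            panic(err)
        }
        stop, err := time.Parse(layout, "2024 Sep 4 17:23:53.854901")
        if err != nil {
            panic(err)
        }
        fmt.Println("clean-cilium-state ran for", stop.Sub(start)) // ~19ms
    }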
Sep 4 17:23:54.799048 kubelet[2499]: E0904 17:23:54.798935 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:54.806326 containerd[1435]: time="2024-09-04T17:23:54.806194519Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:23:54.830927 containerd[1435]: time="2024-09-04T17:23:54.830881167Z" level=info msg="CreateContainer within sandbox \"2264d10dc25a12f28ce6f0b6f722526f8e7d22173119a81801aea214ab4393be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d45082d02e1769eb1c6700313dc450930940184c8c8aa151b0754f074f0f8e0\"" Sep 4 17:23:54.831892 containerd[1435]: time="2024-09-04T17:23:54.831859871Z" level=info msg="StartContainer for \"4d45082d02e1769eb1c6700313dc450930940184c8c8aa151b0754f074f0f8e0\"" Sep 4 17:23:54.862661 systemd[1]: Started cri-containerd-4d45082d02e1769eb1c6700313dc450930940184c8c8aa151b0754f074f0f8e0.scope - libcontainer container 4d45082d02e1769eb1c6700313dc450930940184c8c8aa151b0754f074f0f8e0. Sep 4 17:23:54.896257 containerd[1435]: time="2024-09-04T17:23:54.896209810Z" level=info msg="StartContainer for \"4d45082d02e1769eb1c6700313dc450930940184c8c8aa151b0754f074f0f8e0\" returns successfully" Sep 4 17:23:55.154655 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 17:23:55.803687 kubelet[2499]: E0904 17:23:55.803655 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:55.817316 kubelet[2499]: I0904 17:23:55.817269 2499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bvxkd" podStartSLOduration=4.817213671 podCreationTimestamp="2024-09-04 17:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:55.816405684 +0000 UTC m=+85.378491977" watchObservedRunningTime="2024-09-04 17:23:55.817213671 +0000 UTC m=+85.379299924" Sep 4 17:23:57.597279 kubelet[2499]: E0904 17:23:57.597170 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:57.941561 systemd-networkd[1376]: lxc_health: Link UP Sep 4 17:23:57.952953 systemd-networkd[1376]: lxc_health: Gained carrier Sep 4 17:23:58.546040 kubelet[2499]: E0904 17:23:58.545152 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:59.331661 systemd-networkd[1376]: lxc_health: Gained IPv6LL Sep 4 17:23:59.597153 kubelet[2499]: E0904 17:23:59.597045 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:23:59.810641 kubelet[2499]: E0904 17:23:59.810607 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:24:04.182749 sshd[4299]: pam_unix(sshd:session): session closed for user core Sep 4 17:24:04.187409 
systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:33034.service: Deactivated successfully. Sep 4 17:24:04.189231 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:24:04.190959 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:24:04.191754 systemd-logind[1420]: Removed session 25.