Jul 2 09:24:07.918593 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:24:07.918613 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:24:07.918623 kernel: KASLR enabled
Jul 2 09:24:07.918629 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:24:07.918634 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 09:24:07.918640 kernel: random: crng init done
Jul 2 09:24:07.918647 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:24:07.918653 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 09:24:07.918659 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 09:24:07.918667 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918673 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918679 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918685 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918691 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918699 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918706 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918713 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918720 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:24:07.918726 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 09:24:07.918733 kernel: NUMA: Failed to initialise from firmware
Jul 2 09:24:07.918748 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:24:07.918755 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 09:24:07.918761 kernel: Zone ranges:
Jul 2 09:24:07.918767 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:24:07.918774 kernel: DMA32 empty
Jul 2 09:24:07.918782 kernel: Normal empty
Jul 2 09:24:07.918788 kernel: Movable zone start for each node
Jul 2 09:24:07.918794 kernel: Early memory node ranges
Jul 2 09:24:07.918801 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 09:24:07.918807 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 09:24:07.918814 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 09:24:07.918820 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 09:24:07.918826 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 09:24:07.918833 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 09:24:07.918839 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 09:24:07.918845 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:24:07.918851 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 09:24:07.918859 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:24:07.918865 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:24:07.918872 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:24:07.918881 kernel: psci: Trusted OS migration not required
Jul 2 09:24:07.918888 kernel: psci: SMC Calling Convention v1.1
Jul 2 09:24:07.918895 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 09:24:07.918903 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:24:07.918910 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:24:07.918917 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 09:24:07.918924 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:24:07.918931 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:24:07.918938 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:24:07.918945 kernel: CPU features: detected: Spectre-v4
Jul 2 09:24:07.918951 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:24:07.918958 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:24:07.918965 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:24:07.918973 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:24:07.918980 kernel: alternatives: applying boot alternatives
Jul 2 09:24:07.918987 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:24:07.918994 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:24:07.919001 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:24:07.919008 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:24:07.919014 kernel: Fallback order for Node 0: 0
Jul 2 09:24:07.919021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 09:24:07.919028 kernel: Policy zone: DMA
Jul 2 09:24:07.919051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:24:07.919058 kernel: software IO TLB: area num 4.
Jul 2 09:24:07.919066 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 09:24:07.919074 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 09:24:07.919080 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 09:24:07.919087 kernel: trace event string verifier disabled
Jul 2 09:24:07.919094 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:24:07.919101 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:24:07.919108 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 09:24:07.919114 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:24:07.919121 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:24:07.919128 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:24:07.919135 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 09:24:07.919142 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:24:07.919150 kernel: GICv3: 256 SPIs implemented
Jul 2 09:24:07.919157 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:24:07.919164 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:24:07.919171 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:24:07.919178 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 09:24:07.919185 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 09:24:07.919192 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 09:24:07.919198 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 09:24:07.919205 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 09:24:07.919212 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 09:24:07.919219 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:24:07.919227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:24:07.919234 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:24:07.919240 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:24:07.919247 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:24:07.919254 kernel: arm-pv: using stolen time PV
Jul 2 09:24:07.919261 kernel: Console: colour dummy device 80x25
Jul 2 09:24:07.919269 kernel: ACPI: Core revision 20230628
Jul 2 09:24:07.919276 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:24:07.919283 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:24:07.919290 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:24:07.919298 kernel: SELinux: Initializing.
Jul 2 09:24:07.919305 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:24:07.919312 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:24:07.919319 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:24:07.919327 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:24:07.919334 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:24:07.919340 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:24:07.919348 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 09:24:07.919354 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 09:24:07.919363 kernel: Remapping and enabling EFI services.
Jul 2 09:24:07.919370 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:24:07.919376 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:24:07.919384 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 09:24:07.919391 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 09:24:07.919397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:24:07.919404 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:24:07.919411 kernel: Detected PIPT I-cache on CPU2
Jul 2 09:24:07.919418 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 09:24:07.919426 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 09:24:07.919434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:24:07.919441 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 09:24:07.919453 kernel: Detected PIPT I-cache on CPU3
Jul 2 09:24:07.919462 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 09:24:07.919469 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 09:24:07.919476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:24:07.919483 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 09:24:07.919490 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 09:24:07.919497 kernel: SMP: Total of 4 processors activated.
Jul 2 09:24:07.919506 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:24:07.919514 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:24:07.919521 kernel: CPU features: detected: Common not Private translations
Jul 2 09:24:07.919529 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:24:07.919536 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 09:24:07.919543 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:24:07.919551 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:24:07.919558 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:24:07.919566 kernel: CPU features: detected: RAS Extension Support
Jul 2 09:24:07.919574 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 09:24:07.919581 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:24:07.919588 kernel: alternatives: applying system-wide alternatives
Jul 2 09:24:07.919596 kernel: devtmpfs: initialized
Jul 2 09:24:07.919603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:24:07.919611 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 09:24:07.919618 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:24:07.919625 kernel: SMBIOS 3.0.0 present.
Jul 2 09:24:07.919634 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 09:24:07.919642 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:24:07.919649 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:24:07.919656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:24:07.919664 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:24:07.919671 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:24:07.919679 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 2 09:24:07.919686 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:24:07.919693 kernel: cpuidle: using governor menu
Jul 2 09:24:07.919701 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:24:07.919708 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:24:07.919716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:24:07.919723 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:24:07.919730 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:24:07.919742 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:24:07.919750 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:24:07.919758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:24:07.919765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:24:07.919774 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:24:07.919781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:24:07.919789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:24:07.919796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:24:07.919803 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:24:07.919810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:24:07.919817 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:24:07.919825 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:24:07.919832 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:24:07.919841 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:24:07.919848 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:24:07.919856 kernel: ACPI: Interpreter enabled
Jul 2 09:24:07.919875 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:24:07.919883 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 09:24:07.919890 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:24:07.919897 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:24:07.919905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 09:24:07.920049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:24:07.920137 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 09:24:07.920207 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 09:24:07.920279 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 09:24:07.920352 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 09:24:07.920363 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 09:24:07.920370 kernel: PCI host bridge to bus 0000:00
Jul 2 09:24:07.920448 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 09:24:07.920515 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 09:24:07.920575 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 09:24:07.920635 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 09:24:07.920718 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 09:24:07.920810 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 09:24:07.920887 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 09:24:07.920959 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 09:24:07.921028 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:24:07.921115 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:24:07.921183 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 09:24:07.921252 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 09:24:07.921313 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 09:24:07.921371 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 09:24:07.921432 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 09:24:07.921442 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 09:24:07.921450 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 09:24:07.921457 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 09:24:07.921464 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 09:24:07.921471 kernel: iommu: Default domain type: Translated
Jul 2 09:24:07.921478 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:24:07.921486 kernel: efivars: Registered efivars operations
Jul 2 09:24:07.921493 kernel: vgaarb: loaded
Jul 2 09:24:07.921502 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:24:07.921510 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:24:07.921517 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:24:07.921524 kernel: pnp: PnP ACPI init
Jul 2 09:24:07.921598 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 09:24:07.921610 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 09:24:07.921617 kernel: NET: Registered PF_INET protocol family
Jul 2 09:24:07.921625 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:24:07.921634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:24:07.921642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:24:07.921649 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:24:07.921657 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:24:07.921664 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:24:07.921672 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:24:07.921679 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:24:07.921686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:24:07.921694 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:24:07.921703 kernel: kvm [1]: HYP mode not available
Jul 2 09:24:07.921710 kernel: Initialise system trusted keyrings
Jul 2 09:24:07.921718 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:24:07.921725 kernel: Key type asymmetric registered
Jul 2 09:24:07.921733 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:24:07.921747 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:24:07.921755 kernel: io scheduler mq-deadline registered
Jul 2 09:24:07.921762 kernel: io scheduler kyber registered
Jul 2 09:24:07.921770 kernel: io scheduler bfq registered
Jul 2 09:24:07.921779 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 09:24:07.921787 kernel: ACPI: button: Power Button [PWRB]
Jul 2 09:24:07.921794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 09:24:07.921865 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 09:24:07.921875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:24:07.921883 kernel: thunder_xcv, ver 1.0
Jul 2 09:24:07.921890 kernel: thunder_bgx, ver 1.0
Jul 2 09:24:07.921897 kernel: nicpf, ver 1.0
Jul 2 09:24:07.921904 kernel: nicvf, ver 1.0
Jul 2 09:24:07.921981 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:24:07.922128 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:24:07 UTC (1719912247)
Jul 2 09:24:07.922140 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:24:07.922148 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 09:24:07.922155 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:24:07.922163 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:24:07.922170 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:24:07.922178 kernel: Segment Routing with IPv6
Jul 2 09:24:07.922189 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:24:07.922197 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:24:07.922204 kernel: Key type dns_resolver registered
Jul 2 09:24:07.922211 kernel: registered taskstats version 1
Jul 2 09:24:07.922219 kernel: Loading compiled-in X.509 certificates
Jul 2 09:24:07.922226 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:24:07.922234 kernel: Key type .fscrypt registered
Jul 2 09:24:07.922241 kernel: Key type fscrypt-provisioning registered
Jul 2 09:24:07.922248 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:24:07.922257 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:24:07.922264 kernel: ima: No architecture policies found
Jul 2 09:24:07.922272 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:24:07.922279 kernel: clk: Disabling unused clocks
Jul 2 09:24:07.922287 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:24:07.922294 kernel: Run /init as init process
Jul 2 09:24:07.922301 kernel: with arguments:
Jul 2 09:24:07.922309 kernel: /init
Jul 2 09:24:07.922316 kernel: with environment:
Jul 2 09:24:07.922324 kernel: HOME=/
Jul 2 09:24:07.922332 kernel: TERM=linux
Jul 2 09:24:07.922339 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:24:07.922348 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:24:07.922357 systemd[1]: Detected virtualization kvm.
Jul 2 09:24:07.922365 systemd[1]: Detected architecture arm64.
Jul 2 09:24:07.922373 systemd[1]: Running in initrd.
Jul 2 09:24:07.922380 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:24:07.922390 systemd[1]: Hostname set to .
Jul 2 09:24:07.922398 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:24:07.922406 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:24:07.922414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:24:07.922422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:24:07.922430 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:24:07.922438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:24:07.922446 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:24:07.922456 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:24:07.922466 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:24:07.922474 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:24:07.922482 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:24:07.922490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:24:07.922498 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:24:07.922508 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:24:07.922516 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:24:07.922524 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:24:07.922532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:24:07.922539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:24:07.922547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:24:07.922555 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:24:07.922575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:24:07.922583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:24:07.922593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:24:07.922601 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:24:07.922608 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:24:07.922616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:24:07.922625 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:24:07.922632 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:24:07.922640 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:24:07.922648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:24:07.922656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:24:07.922665 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:24:07.922674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:24:07.922682 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:24:07.922690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:24:07.922715 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 09:24:07.922734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:24:07.922751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:24:07.922760 systemd-journald[238]: Journal started
Jul 2 09:24:07.922780 systemd-journald[238]: Runtime Journal (/run/log/journal/de669de41b8e4489b9b08a04275981dd) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:24:07.914435 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 09:24:07.926173 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:24:07.926534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:24:07.929865 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:24:07.930632 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 09:24:07.931309 kernel: Bridge firewalling registered
Jul 2 09:24:07.938211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:24:07.940613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:24:07.941651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:24:07.944623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:24:07.947368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:24:07.948550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:24:07.951215 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:24:07.954441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:24:07.958094 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:24:07.962589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:24:07.968066 dracut-cmdline[272]: dracut-dracut-053
Jul 2 09:24:07.972065 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:24:07.991000 systemd-resolved[277]: Positive Trust Anchors:
Jul 2 09:24:07.991015 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:24:07.991102 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:24:07.995671 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 2 09:24:07.996665 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:24:07.999639 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:24:08.040064 kernel: SCSI subsystem initialized
Jul 2 09:24:08.045055 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:24:08.053069 kernel: iscsi: registered transport (tcp)
Jul 2 09:24:08.066055 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:24:08.066074 kernel: QLogic iSCSI HBA Driver
Jul 2 09:24:08.115147 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:24:08.123198 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:24:08.142180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:24:08.142234 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:24:08.143433 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:24:08.190083 kernel: raid6: neonx8 gen() 15536 MB/s
Jul 2 09:24:08.207061 kernel: raid6: neonx4 gen() 15654 MB/s
Jul 2 09:24:08.224056 kernel: raid6: neonx2 gen() 13237 MB/s
Jul 2 09:24:08.241058 kernel: raid6: neonx1 gen() 10472 MB/s
Jul 2 09:24:08.258070 kernel: raid6: int64x8 gen() 6946 MB/s
Jul 2 09:24:08.275068 kernel: raid6: int64x4 gen() 7327 MB/s
Jul 2 09:24:08.292070 kernel: raid6: int64x2 gen() 6109 MB/s
Jul 2 09:24:08.309070 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 2 09:24:08.309103 kernel: raid6: using algorithm neonx4 gen() 15654 MB/s
Jul 2 09:24:08.326077 kernel: raid6: .... xor() 12102 MB/s, rmw enabled
Jul 2 09:24:08.326110 kernel: raid6: using neon recovery algorithm
Jul 2 09:24:08.331064 kernel: xor: measuring software checksum speed
Jul 2 09:24:08.331098 kernel: 8regs : 19735 MB/sec
Jul 2 09:24:08.332295 kernel: 32regs : 19678 MB/sec
Jul 2 09:24:08.333485 kernel: arm64_neon : 27116 MB/sec
Jul 2 09:24:08.333512 kernel: xor: using function: arm64_neon (27116 MB/sec)
Jul 2 09:24:08.385066 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:24:08.397488 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:24:08.405884 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:24:08.417298 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 2 09:24:08.420384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:24:08.424223 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:24:08.440525 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 2 09:24:08.465247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:24:08.477242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 09:24:08.517460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:24:08.525222 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 09:24:08.537947 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 09:24:08.540450 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 09:24:08.542654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 09:24:08.544631 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 09:24:08.553346 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 09:24:08.559927 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 2 09:24:08.568354 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 09:24:08.568501 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 09:24:08.568513 kernel: GPT:9289727 != 19775487 Jul 2 09:24:08.568523 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 09:24:08.568532 kernel: GPT:9289727 != 19775487 Jul 2 09:24:08.568540 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 09:24:08.568549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 09:24:08.565065 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 09:24:08.566490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:24:08.566597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 09:24:08.571972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 09:24:08.572886 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 09:24:08.573018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:24:08.574678 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:24:08.584266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:24:08.597368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:24:08.601058 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (521) Jul 2 09:24:08.601081 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524) Jul 2 09:24:08.608891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 09:24:08.616277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 09:24:08.620083 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 09:24:08.620936 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 09:24:08.626794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 09:24:08.639168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 09:24:08.641079 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 09:24:08.651270 disk-uuid[551]: Primary Header is updated. Jul 2 09:24:08.651270 disk-uuid[551]: Secondary Entries is updated. Jul 2 09:24:08.651270 disk-uuid[551]: Secondary Header is updated. Jul 2 09:24:08.655137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 09:24:08.659056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 09:24:09.669080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 09:24:09.669655 disk-uuid[559]: The operation has completed successfully. Jul 2 09:24:09.699477 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 09:24:09.699572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 09:24:09.722228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 09:24:09.725721 sh[575]: Success Jul 2 09:24:09.741324 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 09:24:09.775164 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 09:24:09.787417 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 09:24:09.791069 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 09:24:09.812659 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1 Jul 2 09:24:09.812703 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 09:24:09.812714 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 09:24:09.814222 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 09:24:09.815499 kernel: BTRFS info (device dm-0): using free space tree Jul 2 09:24:09.820426 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 09:24:09.821503 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 09:24:09.833180 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 09:24:09.835908 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 2 09:24:09.849483 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:24:09.849524 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:24:09.849535 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:24:09.854188 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:24:09.861804 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:24:09.863083 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:24:09.869055 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:24:09.874002 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:24:09.944066 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:24:09.961209 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:24:09.990335 ignition[672]: Ignition 2.18.0
Jul 2 09:24:09.990351 ignition[672]: Stage: fetch-offline
Jul 2 09:24:09.991013 systemd-networkd[766]: lo: Link UP
Jul 2 09:24:09.990390 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:24:09.991016 systemd-networkd[766]: lo: Gained carrier
Jul 2 09:24:09.990399 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:24:09.991695 systemd-networkd[766]: Enumeration completed
Jul 2 09:24:09.990482 ignition[672]: parsed url from cmdline: ""
Jul 2 09:24:09.991980 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:24:09.990485 ignition[672]: no config URL provided
Jul 2 09:24:09.992118 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:24:09.990491 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:24:09.992121 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:24:09.990498 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:24:09.992767 systemd-networkd[766]: eth0: Link UP
Jul 2 09:24:09.990520 ignition[672]: op(1): [started] loading QEMU firmware config module
Jul 2 09:24:09.992770 systemd-networkd[766]: eth0: Gained carrier
Jul 2 09:24:09.990524 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 09:24:09.992777 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:24:10.000931 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jul 2 09:24:09.994724 systemd[1]: Reached target network.target - Network.
Jul 2 09:24:10.015081 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:24:10.048727 ignition[672]: parsing config with SHA512: 8a093fcd09dd24f59ff24409b5b1fa852f7a44ad673b37471df7b32d744c19c0105c95ec139478ec10c18755f045650ed148bd9cf61026740ba8877d3412313a
Jul 2 09:24:10.053451 unknown[672]: fetched base config from "system"
Jul 2 09:24:10.053463 unknown[672]: fetched user config from "qemu"
Jul 2 09:24:10.053938 ignition[672]: fetch-offline: fetch-offline passed
Jul 2 09:24:10.056481 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:24:10.054001 ignition[672]: Ignition finished successfully
Jul 2 09:24:10.057649 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 09:24:10.067237 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:24:10.080524 ignition[773]: Ignition 2.18.0
Jul 2 09:24:10.080531 ignition[773]: Stage: kargs
Jul 2 09:24:10.080748 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:24:10.081919 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:24:10.086057 ignition[773]: kargs: kargs passed
Jul 2 09:24:10.086115 ignition[773]: Ignition finished successfully
Jul 2 09:24:10.089299 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:24:10.099231 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:24:10.111114 ignition[782]: Ignition 2.18.0
Jul 2 09:24:10.111123 ignition[782]: Stage: disks
Jul 2 09:24:10.111284 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:24:10.113913 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:24:10.111294 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:24:10.115323 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:24:10.112212 ignition[782]: disks: disks passed
Jul 2 09:24:10.116716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:24:10.112264 ignition[782]: Ignition finished successfully
Jul 2 09:24:10.118425 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:24:10.120007 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:24:10.121237 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:24:10.127195 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:24:10.142742 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 09:24:10.150069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:24:10.157154 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:24:10.205070 kernel: EXT4-fs (vda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:24:10.205401 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:24:10.206532 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:24:10.220135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:24:10.224336 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:24:10.225241 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 09:24:10.229079 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jul 2 09:24:10.225285 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:24:10.225309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:24:10.234431 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:24:10.234454 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:24:10.234465 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:24:10.232672 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:24:10.236507 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:24:10.238299 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:24:10.239212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:24:10.281669 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:24:10.285386 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:24:10.289188 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:24:10.293173 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:24:10.384016 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:24:10.397202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:24:10.398625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:24:10.404059 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:24:10.419965 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:24:10.422105 ignition[913]: INFO : Ignition 2.18.0
Jul 2 09:24:10.422105 ignition[913]: INFO : Stage: mount
Jul 2 09:24:10.423639 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:24:10.423639 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:24:10.423639 ignition[913]: INFO : mount: mount passed
Jul 2 09:24:10.423639 ignition[913]: INFO : Ignition finished successfully
Jul 2 09:24:10.424811 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:24:10.437192 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:24:10.809350 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:24:10.818271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:24:10.824677 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Jul 2 09:24:10.824708 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:24:10.824718 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:24:10.826042 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:24:10.828056 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:24:10.828977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:24:10.846189 ignition[946]: INFO : Ignition 2.18.0
Jul 2 09:24:10.846189 ignition[946]: INFO : Stage: files
Jul 2 09:24:10.847511 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:24:10.847511 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:24:10.847511 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:24:10.850189 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:24:10.850189 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:24:10.852187 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:24:10.852187 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:24:10.852187 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:24:10.850968 unknown[946]: wrote ssh authorized keys file for user: core
Jul 2 09:24:10.855961 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 09:24:10.855961 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 09:24:10.855961 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:24:10.855961 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:24:10.873890 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 09:24:10.911919 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:24:10.913446 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:24:10.913446 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 09:24:11.204499 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 09:24:11.265328 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:24:11.266808 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 09:24:11.478780 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 09:24:11.632212 systemd-networkd[766]: eth0: Gained IPv6LL
Jul 2 09:24:11.660015 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 09:24:11.660015 ignition[946]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 09:24:11.662649 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 2 09:24:11.664211 ignition[946]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:24:11.695303 ignition[946]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:24:11.700064 ignition[946]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:24:11.701161 ignition[946]: INFO : files: files passed
Jul 2 09:24:11.701161 ignition[946]: INFO : Ignition finished successfully
Jul 2 09:24:11.705425 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:24:11.718176 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:24:11.719811 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:24:11.722568 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:24:11.723376 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:24:11.727649 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 09:24:11.731301 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:24:11.732788 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:24:11.732788 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:24:11.733866 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:24:11.734917 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:24:11.748247 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:24:11.769802 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:24:11.769902 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:24:11.771015 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:24:11.773135 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:24:11.773970 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:24:11.779567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:24:11.791327 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:24:11.804218 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:24:11.812195 systemd[1]: Stopped target network.target - Network.
Jul 2 09:24:11.813893 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:24:11.814995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:24:11.817215 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:24:11.818606 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:24:11.818732 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:24:11.820873 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 09:24:11.821702 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 09:24:11.823372 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 09:24:11.824967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:24:11.826450 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 09:24:11.827985 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 09:24:11.829710 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:24:11.831383 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 09:24:11.832925 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 09:24:11.834613 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 09:24:11.835975 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 09:24:11.836124 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:24:11.838167 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:24:11.839887 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:24:11.841370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 09:24:11.842152 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:24:11.843104 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 09:24:11.843228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:24:11.845485 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 09:24:11.845600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:24:11.847330 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 09:24:11.849988 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 09:24:11.855147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:24:11.857025 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 09:24:11.857744 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 09:24:11.860642 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 09:24:11.860733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:24:11.861933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 09:24:11.862019 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:24:11.863366 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 09:24:11.863479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:24:11.864928 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 09:24:11.865093 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 09:24:11.875225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 09:24:11.879222 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 09:24:11.881095 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 09:24:11.883124 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 09:24:11.885084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 09:24:11.885212 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:24:11.886220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 09:24:11.886314 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:24:11.893451 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 09:24:11.893550 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 09:24:11.897098 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jul 2 09:24:11.900698 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 09:24:11.902198 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 09:24:11.904410 ignition[1002]: INFO : Ignition 2.18.0 Jul 2 09:24:11.904410 ignition[1002]: INFO : Stage: umount Jul 2 09:24:11.904410 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:24:11.904410 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 09:24:11.908952 ignition[1002]: INFO : umount: umount passed Jul 2 09:24:11.908952 ignition[1002]: INFO : Ignition finished successfully Jul 2 09:24:11.904630 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 09:24:11.905275 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 09:24:11.905405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 09:24:11.909510 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 09:24:11.909639 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 09:24:11.911146 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 09:24:11.911184 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:24:11.913187 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 09:24:11.913243 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 09:24:11.914079 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 09:24:11.914124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 09:24:11.914955 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 09:24:11.914998 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 09:24:11.917191 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 09:24:11.917245 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 09:24:11.934183 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 09:24:11.934917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 2 09:24:11.934988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:24:11.936872 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:24:11.936924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:24:11.938308 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 09:24:11.938347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:24:11.940314 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 09:24:11.940361 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:24:11.941977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:24:11.943990 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 09:24:11.944108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 09:24:11.949424 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 09:24:11.949477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 09:24:11.957708 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 09:24:11.957870 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 09:24:11.959547 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 09:24:11.959690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:24:11.961599 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 09:24:11.961672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:24:11.965705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 09:24:11.965758 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:24:11.969825 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 09:24:11.969878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:24:11.973209 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 09:24:11.973257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:24:11.975693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:24:11.975758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:24:11.997188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 09:24:11.998215 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 09:24:11.998277 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:24:12.000253 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 09:24:12.000299 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:24:12.002165 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 09:24:12.002209 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:24:12.004263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:24:12.004307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:24:12.006437 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 09:24:12.006519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 09:24:12.008699 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 09:24:12.010684 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 09:24:12.021206 systemd[1]: Switching root.
Jul 2 09:24:12.049757 systemd-journald[238]: Journal stopped
Jul 2 09:24:12.814878 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 2 09:24:12.814938 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 09:24:12.814951 kernel: SELinux: policy capability open_perms=1
Jul 2 09:24:12.814961 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 09:24:12.814971 kernel: SELinux: policy capability always_check_network=0
Jul 2 09:24:12.814980 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 09:24:12.814994 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 09:24:12.815003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 09:24:12.815016 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 09:24:12.815026 kernel: audit: type=1403 audit(1719912252.259:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 09:24:12.815075 systemd[1]: Successfully loaded SELinux policy in 34.746ms.
Jul 2 09:24:12.815098 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.539ms.
Jul 2 09:24:12.815111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:24:12.815122 systemd[1]: Detected virtualization kvm.
Jul 2 09:24:12.815136 systemd[1]: Detected architecture arm64.
Jul 2 09:24:12.815147 systemd[1]: Detected first boot.
Jul 2 09:24:12.815158 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:24:12.815168 zram_generator::config[1063]: No configuration found.
Jul 2 09:24:12.815182 systemd[1]: Populated /etc with preset unit settings.
Jul 2 09:24:12.815192 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 09:24:12.815203 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 09:24:12.815214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 09:24:12.815225 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 09:24:12.815236 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 09:24:12.815250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 09:24:12.815262 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 09:24:12.815273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 09:24:12.815285 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 09:24:12.815295 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 09:24:12.815308 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:24:12.815319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:24:12.815329 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 09:24:12.815340 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 09:24:12.815351 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 09:24:12.815361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:24:12.815372 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 09:24:12.815383 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:24:12.815394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 09:24:12.815405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:24:12.815415 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:24:12.815426 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:24:12.815436 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:24:12.815447 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 09:24:12.815457 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 09:24:12.815469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:24:12.815480 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:24:12.815490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:24:12.815501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:24:12.815511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:24:12.815522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 09:24:12.815533 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 09:24:12.815549 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 09:24:12.815560 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 09:24:12.815572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 09:24:12.815583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 09:24:12.815593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 09:24:12.815603 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 09:24:12.815614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:24:12.815624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:24:12.815635 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 09:24:12.815645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:24:12.815656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:24:12.815668 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:24:12.815678 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 09:24:12.815688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:24:12.815699 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 09:24:12.815710 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 09:24:12.815720 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 09:24:12.815731 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:24:12.815774 kernel: fuse: init (API version 7.39)
Jul 2 09:24:12.815788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:24:12.815801 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 09:24:12.815811 kernel: ACPI: bus type drm_connector registered
Jul 2 09:24:12.815821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 09:24:12.815831 kernel: loop: module loaded
Jul 2 09:24:12.815861 systemd-journald[1137]: Collecting audit messages is disabled.
Jul 2 09:24:12.815883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:24:12.815894 systemd-journald[1137]: Journal started
Jul 2 09:24:12.815917 systemd-journald[1137]: Runtime Journal (/run/log/journal/de669de41b8e4489b9b08a04275981dd) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:24:12.823204 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:24:12.824218 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 09:24:12.825329 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 09:24:12.826661 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 09:24:12.827723 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 09:24:12.828949 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 09:24:12.829975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 09:24:12.831229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:24:12.832542 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 09:24:12.832714 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 09:24:12.834091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:24:12.834252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:24:12.835667 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:24:12.835852 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:24:12.836992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:24:12.837168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:24:12.838389 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 09:24:12.838555 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 09:24:12.840179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:24:12.840399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:24:12.841832 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:24:12.843271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 09:24:12.845064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 09:24:12.846552 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 09:24:12.859290 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 09:24:12.867151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 09:24:12.869312 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 09:24:12.870372 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 09:24:12.872629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 09:24:12.878278 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 09:24:12.879476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:24:12.881325 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 09:24:12.884177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:24:12.887808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:24:12.889460 systemd-journald[1137]: Time spent on flushing to /var/log/journal/de669de41b8e4489b9b08a04275981dd is 17.147ms for 847 entries.
Jul 2 09:24:12.889460 systemd-journald[1137]: System Journal (/var/log/journal/de669de41b8e4489b9b08a04275981dd) is 8.0M, max 195.6M, 187.6M free.
Jul 2 09:24:12.927313 systemd-journald[1137]: Received client request to flush runtime journal.
Jul 2 09:24:12.890823 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:24:12.893241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:24:12.894463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 09:24:12.895599 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 09:24:12.901321 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 09:24:12.910282 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 09:24:12.911303 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 09:24:12.921209 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 2 09:24:12.921221 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 2 09:24:12.923115 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:24:12.925517 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:24:12.932212 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 09:24:12.933658 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 09:24:12.939378 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 09:24:12.963239 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 09:24:12.973279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:24:12.986686 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 2 09:24:12.987070 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 2 09:24:12.991445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:24:13.354268 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 09:24:13.367220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:24:13.389387 systemd-udevd[1232]: Using default interface naming scheme 'v255'.
Jul 2 09:24:13.407149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:24:13.420231 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:24:13.438058 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1247)
Jul 2 09:24:13.449313 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 09:24:13.453523 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 2 09:24:13.469099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1234)
Jul 2 09:24:13.496983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:24:13.513339 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 09:24:13.563835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:24:13.571443 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 09:24:13.576730 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 09:24:13.594653 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:24:13.599981 systemd-networkd[1242]: lo: Link UP
Jul 2 09:24:13.599989 systemd-networkd[1242]: lo: Gained carrier
Jul 2 09:24:13.603779 systemd-networkd[1242]: Enumeration completed
Jul 2 09:24:13.603971 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:24:13.606866 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:24:13.606877 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:24:13.607536 systemd-networkd[1242]: eth0: Link UP
Jul 2 09:24:13.607549 systemd-networkd[1242]: eth0: Gained carrier
Jul 2 09:24:13.607562 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:24:13.613236 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 09:24:13.621364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:24:13.625770 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 09:24:13.627154 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:24:13.629761 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 09:24:13.630183 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:24:13.638137 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:24:13.676671 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 09:24:13.678176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:24:13.679397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 09:24:13.679433 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:24:13.680409 systemd[1]: Reached target machines.target - Containers.
Jul 2 09:24:13.682273 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 09:24:13.698228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 09:24:13.700484 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 09:24:13.701382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:24:13.703666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 09:24:13.705800 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 09:24:13.709328 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 09:24:13.711231 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 09:24:13.720817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 09:24:13.723300 kernel: loop0: detected capacity change from 0 to 59672
Jul 2 09:24:13.724482 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 09:24:13.736465 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 09:24:13.737323 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 09:24:13.739067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 09:24:13.781072 kernel: loop1: detected capacity change from 0 to 113672
Jul 2 09:24:13.825060 kernel: loop2: detected capacity change from 0 to 193208
Jul 2 09:24:13.868060 kernel: loop3: detected capacity change from 0 to 59672
Jul 2 09:24:13.873061 kernel: loop4: detected capacity change from 0 to 113672
Jul 2 09:24:13.877072 kernel: loop5: detected capacity change from 0 to 193208
Jul 2 09:24:13.880724 (sd-merge)[1300]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 09:24:13.881155 (sd-merge)[1300]: Merged extensions into '/usr'.
Jul 2 09:24:13.884541 systemd[1]: Reloading requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 09:24:13.884557 systemd[1]: Reloading...
Jul 2 09:24:13.931067 zram_generator::config[1324]: No configuration found.
Jul 2 09:24:13.948926 ldconfig[1283]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 09:24:14.032360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:24:14.077059 systemd[1]: Reloading finished in 192 ms.
Jul 2 09:24:14.092095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 09:24:14.093275 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 09:24:14.108228 systemd[1]: Starting ensure-sysext.service...
Jul 2 09:24:14.110086 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:24:14.115510 systemd[1]: Reloading requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)...
Jul 2 09:24:14.115525 systemd[1]: Reloading...
Jul 2 09:24:14.128288 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 09:24:14.128552 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 09:24:14.129207 systemd-tmpfiles[1368]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 09:24:14.129426 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Jul 2 09:24:14.129478 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Jul 2 09:24:14.131606 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:24:14.131621 systemd-tmpfiles[1368]: Skipping /boot
Jul 2 09:24:14.138188 systemd-tmpfiles[1368]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:24:14.138205 systemd-tmpfiles[1368]: Skipping /boot
Jul 2 09:24:14.155077 zram_generator::config[1395]: No configuration found.
Jul 2 09:24:14.245663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:24:14.290390 systemd[1]: Reloading finished in 174 ms.
Jul 2 09:24:14.306058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:24:14.323353 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:24:14.325940 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 09:24:14.328159 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 09:24:14.332324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:24:14.336380 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 09:24:14.342885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:24:14.345392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:24:14.349455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:24:14.352359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:24:14.355315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:24:14.356257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:24:14.356462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:24:14.362693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:24:14.362903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:24:14.365983 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 09:24:14.371816 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:24:14.372065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:24:14.376560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:24:14.388331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:24:14.391293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:24:14.394401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:24:14.396207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:24:14.397820 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 09:24:14.402686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 09:24:14.405449 augenrules[1478]: No rules
Jul 2 09:24:14.404558 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 09:24:14.406224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:24:14.406470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:24:14.417494 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:24:14.419184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:24:14.419342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:24:14.420763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:24:14.420973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:24:14.424852 systemd-resolved[1441]: Positive Trust Anchors:
Jul 2 09:24:14.424872 systemd-resolved[1441]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:24:14.424903 systemd-resolved[1441]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:24:14.426327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:24:14.426510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:24:14.426604 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 09:24:14.429395 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 09:24:14.430798 systemd-resolved[1441]: Defaulting to hostname 'linux'.
Jul 2 09:24:14.431823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:24:14.446358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:24:14.448719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:24:14.450888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:24:14.453361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:24:14.454529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:24:14.454861 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 09:24:14.455644 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:24:14.457440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:24:14.457844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:24:14.459353 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:24:14.459505 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:24:14.460965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:24:14.461136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:24:14.462516 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:24:14.462720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:24:14.465642 systemd[1]: Finished ensure-sysext.service. Jul 2 09:24:14.469752 systemd[1]: Reached target network.target - Network. Jul 2 09:24:14.470713 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 09:24:14.471595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:24:14.471643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 09:24:14.483247 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 09:24:14.525105 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 09:24:14.525943 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 09:24:14.525995 systemd-timesyncd[1519]: Initial clock synchronization to Tue 2024-07-02 09:24:14.369043 UTC. Jul 2 09:24:14.526553 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 09:24:14.527447 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 09:24:14.528593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 09:24:14.529726 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 09:24:14.530812 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 2 09:24:14.530852 systemd[1]: Reached target paths.target - Path Units. Jul 2 09:24:14.531572 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 09:24:14.532642 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 09:24:14.533606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 09:24:14.534664 systemd[1]: Reached target timers.target - Timer Units. Jul 2 09:24:14.536153 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 09:24:14.538511 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 09:24:14.540578 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 09:24:14.546175 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 09:24:14.546975 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 09:24:14.547686 systemd[1]: Reached target basic.target - Basic System. Jul 2 09:24:14.548552 systemd[1]: System is tainted: cgroupsv1 Jul 2 09:24:14.548600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 09:24:14.548621 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 09:24:14.549912 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 09:24:14.551902 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 09:24:14.553772 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 09:24:14.558244 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 09:24:14.559029 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 09:24:14.565305 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 2 09:24:14.570091 jq[1525]: false Jul 2 09:24:14.570888 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 09:24:14.573013 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 09:24:14.578208 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 09:24:14.584240 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 09:24:14.587933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 09:24:14.589128 extend-filesystems[1526]: Found loop3 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found loop4 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found loop5 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda1 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda2 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda3 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found usr Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda4 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda6 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda7 Jul 2 09:24:14.589976 extend-filesystems[1526]: Found vda9 Jul 2 09:24:14.589976 extend-filesystems[1526]: Checking size of /dev/vda9 Jul 2 09:24:14.633916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1234) Jul 2 09:24:14.634195 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 09:24:14.592216 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 2 09:24:14.591413 dbus-daemon[1524]: [system] SELinux support is enabled Jul 2 09:24:14.658098 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 09:24:14.658194 extend-filesystems[1526]: Resized partition /dev/vda9 Jul 2 09:24:14.601652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 09:24:14.659901 extend-filesystems[1552]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 09:24:14.659901 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 09:24:14.659901 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 09:24:14.659901 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 09:24:14.608153 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 09:24:14.668780 update_engine[1544]: I0702 09:24:14.658208 1544 main.cc:92] Flatcar Update Engine starting Jul 2 09:24:14.668780 update_engine[1544]: I0702 09:24:14.666246 1544 update_check_scheduler.cc:74] Next update check in 5m30s Jul 2 09:24:14.669131 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Jul 2 09:24:14.617521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 09:24:14.670417 jq[1551]: true Jul 2 09:24:14.617866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 09:24:14.618201 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 09:24:14.618409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 09:24:14.670905 jq[1557]: true Jul 2 09:24:14.623180 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 09:24:14.623411 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 09:24:14.663028 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 09:24:14.663322 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 2 09:24:14.663601 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 09:24:14.680518 tar[1555]: linux-arm64/helm Jul 2 09:24:14.683938 systemd[1]: Started update-engine.service - Update Engine. Jul 2 09:24:14.686516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 09:24:14.686558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 09:24:14.689254 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 09:24:14.689285 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 09:24:14.690864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 09:24:14.698208 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 09:24:14.706106 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Jul 2 09:24:14.711132 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 09:24:14.715258 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 09:24:14.724957 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 09:24:14.728187 systemd-logind[1540]: New seat seat0. Jul 2 09:24:14.731950 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 2 09:24:14.741650 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 09:24:14.862694 containerd[1558]: time="2024-07-02T09:24:14.862607200Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 09:24:14.890742 containerd[1558]: time="2024-07-02T09:24:14.890692040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 09:24:14.891026 containerd[1558]: time="2024-07-02T09:24:14.890801560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892082080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892112280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892342640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892360080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892428400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892472560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892484160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892534320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892695680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892712400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 09:24:14.893067 containerd[1558]: time="2024-07-02T09:24:14.892721440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893297 containerd[1558]: time="2024-07-02T09:24:14.892847520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:24:14.893297 containerd[1558]: time="2024-07-02T09:24:14.892862040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 09:24:14.893297 containerd[1558]: time="2024-07-02T09:24:14.892909800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 09:24:14.893297 containerd[1558]: time="2024-07-02T09:24:14.892922000Z" level=info msg="metadata content store policy set" policy=shared Jul 2 09:24:14.896566 containerd[1558]: time="2024-07-02T09:24:14.896540040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 09:24:14.896683 containerd[1558]: time="2024-07-02T09:24:14.896667120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 09:24:14.896747 containerd[1558]: time="2024-07-02T09:24:14.896724280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 09:24:14.896820 containerd[1558]: time="2024-07-02T09:24:14.896806880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 09:24:14.896887 containerd[1558]: time="2024-07-02T09:24:14.896874920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 09:24:14.896948 containerd[1558]: time="2024-07-02T09:24:14.896934760Z" level=info msg="NRI interface is disabled by configuration." Jul 2 09:24:14.896997 containerd[1558]: time="2024-07-02T09:24:14.896985760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 09:24:14.897189 containerd[1558]: time="2024-07-02T09:24:14.897168720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 09:24:14.897257 containerd[1558]: time="2024-07-02T09:24:14.897241160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 09:24:14.897311 containerd[1558]: time="2024-07-02T09:24:14.897297600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 09:24:14.897363 containerd[1558]: time="2024-07-02T09:24:14.897351000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 09:24:14.897442 containerd[1558]: time="2024-07-02T09:24:14.897426640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897501 containerd[1558]: time="2024-07-02T09:24:14.897487840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897571 containerd[1558]: time="2024-07-02T09:24:14.897554480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897649 containerd[1558]: time="2024-07-02T09:24:14.897634520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897704 containerd[1558]: time="2024-07-02T09:24:14.897691960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897768 containerd[1558]: time="2024-07-02T09:24:14.897755520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897817 containerd[1558]: time="2024-07-02T09:24:14.897805480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.897865 containerd[1558]: time="2024-07-02T09:24:14.897853800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 2 09:24:14.898022 containerd[1558]: time="2024-07-02T09:24:14.898001800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 09:24:14.898429 containerd[1558]: time="2024-07-02T09:24:14.898410520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 09:24:14.898533 containerd[1558]: time="2024-07-02T09:24:14.898515560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.898595 containerd[1558]: time="2024-07-02T09:24:14.898581800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 09:24:14.898666 containerd[1558]: time="2024-07-02T09:24:14.898652120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 09:24:14.898853 containerd[1558]: time="2024-07-02T09:24:14.898837800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.898978 containerd[1558]: time="2024-07-02T09:24:14.898962680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.899055 containerd[1558]: time="2024-07-02T09:24:14.899023760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.899115 containerd[1558]: time="2024-07-02T09:24:14.899101360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.899167 containerd[1558]: time="2024-07-02T09:24:14.899155240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899205560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899223560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899235760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899248120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899378480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899395400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899410080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899421640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899433320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899446240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899457960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 09:24:14.900062 containerd[1558]: time="2024-07-02T09:24:14.899468080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 09:24:14.900303 containerd[1558]: time="2024-07-02T09:24:14.899772080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 09:24:14.900303 containerd[1558]: time="2024-07-02T09:24:14.899828440Z" level=info msg="Connect containerd service" Jul 2 09:24:14.900303 containerd[1558]: time="2024-07-02T09:24:14.899854760Z" level=info msg="using legacy CRI server" Jul 2 09:24:14.900303 containerd[1558]: time="2024-07-02T09:24:14.899861240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 09:24:14.900303 containerd[1558]: time="2024-07-02T09:24:14.900014400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 09:24:14.901333 containerd[1558]: time="2024-07-02T09:24:14.901303440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:24:14.901445 containerd[1558]: time="2024-07-02T09:24:14.901429440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 09:24:14.901719 containerd[1558]: time="2024-07-02T09:24:14.901700320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 09:24:14.901820 containerd[1558]: time="2024-07-02T09:24:14.901806640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 09:24:14.901899 containerd[1558]: time="2024-07-02T09:24:14.901884360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 09:24:14.902175 containerd[1558]: time="2024-07-02T09:24:14.901668480Z" level=info msg="Start subscribing containerd event" Jul 2 09:24:14.902252 containerd[1558]: time="2024-07-02T09:24:14.902240800Z" level=info msg="Start recovering state" Jul 2 09:24:14.902379 containerd[1558]: time="2024-07-02T09:24:14.902365320Z" level=info msg="Start event monitor" Jul 2 09:24:14.902447 containerd[1558]: time="2024-07-02T09:24:14.902427400Z" level=info msg="Start snapshots syncer" Jul 2 09:24:14.902560 containerd[1558]: time="2024-07-02T09:24:14.902545360Z" level=info msg="Start cni network conf syncer for default" Jul 2 09:24:14.902662 containerd[1558]: time="2024-07-02T09:24:14.902648960Z" level=info msg="Start streaming server" Jul 2 09:24:14.903324 containerd[1558]: time="2024-07-02T09:24:14.903303720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 09:24:14.903486 containerd[1558]: time="2024-07-02T09:24:14.903430280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 09:24:14.903655 containerd[1558]: time="2024-07-02T09:24:14.903641560Z" level=info msg="containerd successfully booted in 0.042124s" Jul 2 09:24:14.903727 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 09:24:15.035995 tar[1555]: linux-arm64/LICENSE Jul 2 09:24:15.036099 tar[1555]: linux-arm64/README.md Jul 2 09:24:15.048316 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 2 09:24:15.132860 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 09:24:15.151207 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 09:24:15.163330 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 09:24:15.168587 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 09:24:15.168811 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 09:24:15.171185 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 09:24:15.183438 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 09:24:15.185842 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 09:24:15.187755 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 09:24:15.188793 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 09:24:15.212181 systemd-networkd[1242]: eth0: Gained IPv6LL Jul 2 09:24:15.214366 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 09:24:15.215802 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 09:24:15.228336 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 09:24:15.230426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:24:15.232287 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 09:24:15.247151 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 09:24:15.247386 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 09:24:15.249819 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 09:24:15.255956 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 09:24:15.696917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:24:15.698269 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 09:24:15.700857 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:24:15.703137 systemd[1]: Startup finished in 5.106s (kernel) + 3.479s (userspace) = 8.586s. Jul 2 09:24:16.172831 kubelet[1660]: E0702 09:24:16.172686 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:24:16.175603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:24:16.175808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:24:20.941823 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 09:24:20.955256 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). Jul 2 09:24:21.001744 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:24:21.003244 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:24:21.010435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 09:24:21.020262 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 09:24:21.022135 systemd-logind[1540]: New session 1 of user core. Jul 2 09:24:21.029491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 09:24:21.031608 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 2 09:24:21.038122 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:24:21.117338 systemd[1680]: Queued start job for default target default.target. Jul 2 09:24:21.117680 systemd[1680]: Created slice app.slice - User Application Slice. Jul 2 09:24:21.117710 systemd[1680]: Reached target paths.target - Paths. Jul 2 09:24:21.117721 systemd[1680]: Reached target timers.target - Timers. Jul 2 09:24:21.128116 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 09:24:21.133304 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 09:24:21.133358 systemd[1680]: Reached target sockets.target - Sockets. Jul 2 09:24:21.133369 systemd[1680]: Reached target basic.target - Basic System. Jul 2 09:24:21.133404 systemd[1680]: Reached target default.target - Main User Target. Jul 2 09:24:21.133427 systemd[1680]: Startup finished in 90ms. Jul 2 09:24:21.133784 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 09:24:21.135136 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 09:24:21.198626 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:54044.service - OpenSSH per-connection server daemon (10.0.0.1:54044). Jul 2 09:24:21.228125 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 54044 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:24:21.229291 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:24:21.233136 systemd-logind[1540]: New session 2 of user core. Jul 2 09:24:21.242410 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 09:24:21.293151 sshd[1692]: pam_unix(sshd:session): session closed for user core Jul 2 09:24:21.300242 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:54058.service - OpenSSH per-connection server daemon (10.0.0.1:54058). 
Jul 2 09:24:21.300584 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:54044.service: Deactivated successfully.
Jul 2 09:24:21.302408 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit.
Jul 2 09:24:21.302965 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 09:24:21.304426 systemd-logind[1540]: Removed session 2.
Jul 2 09:24:21.329416 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 54058 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:24:21.330563 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:24:21.334500 systemd-logind[1540]: New session 3 of user core.
Jul 2 09:24:21.340264 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 09:24:21.386774 sshd[1697]: pam_unix(sshd:session): session closed for user core
Jul 2 09:24:21.398342 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:54064.service - OpenSSH per-connection server daemon (10.0.0.1:54064).
Jul 2 09:24:21.399046 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:54058.service: Deactivated successfully.
Jul 2 09:24:21.400482 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 09:24:21.401786 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit.
Jul 2 09:24:21.402876 systemd-logind[1540]: Removed session 3.
Jul 2 09:24:21.427192 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 54064 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:24:21.428254 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:24:21.431872 systemd-logind[1540]: New session 4 of user core.
Jul 2 09:24:21.443259 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 09:24:21.493708 sshd[1705]: pam_unix(sshd:session): session closed for user core
Jul 2 09:24:21.504301 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:54078.service - OpenSSH per-connection server daemon (10.0.0.1:54078).
Jul 2 09:24:21.504689 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:54064.service: Deactivated successfully.
Jul 2 09:24:21.506447 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit.
Jul 2 09:24:21.506975 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 09:24:21.508174 systemd-logind[1540]: Removed session 4.
Jul 2 09:24:21.532883 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 54078 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:24:21.533910 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:24:21.538190 systemd-logind[1540]: New session 5 of user core.
Jul 2 09:24:21.548424 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 09:24:21.611609 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 09:24:21.611834 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:24:21.626722 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jul 2 09:24:21.628252 sshd[1713]: pam_unix(sshd:session): session closed for user core
Jul 2 09:24:21.635299 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:54086.service - OpenSSH per-connection server daemon (10.0.0.1:54086).
Jul 2 09:24:21.635641 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:54078.service: Deactivated successfully.
Jul 2 09:24:21.637399 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit.
Jul 2 09:24:21.637936 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 09:24:21.639292 systemd-logind[1540]: Removed session 5.
Jul 2 09:24:21.664433 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 54086 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:24:21.665484 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:24:21.669142 systemd-logind[1540]: New session 6 of user core.
Jul 2 09:24:21.681316 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 09:24:21.730044 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 09:24:21.730267 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:24:21.733407 sudo[1730]: pam_unix(sudo:session): session closed for user root
Jul 2 09:24:21.737498 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 09:24:21.737724 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:24:21.752418 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 09:24:21.753960 auditctl[1733]: No rules
Jul 2 09:24:21.754720 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 09:24:21.754940 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 09:24:21.756591 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:24:21.778608 augenrules[1752]: No rules
Jul 2 09:24:21.779263 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:24:21.780368 sudo[1729]: pam_unix(sudo:session): session closed for user root
Jul 2 09:24:21.782223 sshd[1722]: pam_unix(sshd:session): session closed for user core
Jul 2 09:24:21.792275 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:54100.service - OpenSSH per-connection server daemon (10.0.0.1:54100).
Jul 2 09:24:21.792725 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:54086.service: Deactivated successfully.
Jul 2 09:24:21.794504 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit.
Jul 2 09:24:21.795549 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 09:24:21.796736 systemd-logind[1540]: Removed session 6.
Jul 2 09:24:21.821147 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 54100 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:24:21.822242 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:24:21.825920 systemd-logind[1540]: New session 7 of user core.
Jul 2 09:24:21.838317 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 09:24:21.888338 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 09:24:21.888566 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:24:21.989249 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 09:24:21.989413 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 09:24:22.232177 dockerd[1775]: time="2024-07-02T09:24:22.232117019Z" level=info msg="Starting up"
Jul 2 09:24:22.423705 dockerd[1775]: time="2024-07-02T09:24:22.423661000Z" level=info msg="Loading containers: start."
Jul 2 09:24:22.496061 kernel: Initializing XFRM netlink socket
Jul 2 09:24:22.566830 systemd-networkd[1242]: docker0: Link UP
Jul 2 09:24:22.575394 dockerd[1775]: time="2024-07-02T09:24:22.575339302Z" level=info msg="Loading containers: done."
Jul 2 09:24:22.629847 dockerd[1775]: time="2024-07-02T09:24:22.629792187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 09:24:22.630063 dockerd[1775]: time="2024-07-02T09:24:22.630016725Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 09:24:22.630177 dockerd[1775]: time="2024-07-02T09:24:22.630149217Z" level=info msg="Daemon has completed initialization"
Jul 2 09:24:22.657356 dockerd[1775]: time="2024-07-02T09:24:22.657304532Z" level=info msg="API listen on /run/docker.sock"
Jul 2 09:24:22.657522 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 09:24:23.219569 containerd[1558]: time="2024-07-02T09:24:23.219512606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 09:24:23.887921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431544972.mount: Deactivated successfully.
Jul 2 09:24:24.850653 containerd[1558]: time="2024-07-02T09:24:24.850601831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:24.851743 containerd[1558]: time="2024-07-02T09:24:24.851674671Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540"
Jul 2 09:24:24.852303 containerd[1558]: time="2024-07-02T09:24:24.852257098Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:24.855484 containerd[1558]: time="2024-07-02T09:24:24.855421658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:24.856673 containerd[1558]: time="2024-07-02T09:24:24.856635622Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.63705916s"
Jul 2 09:24:24.856711 containerd[1558]: time="2024-07-02T09:24:24.856679760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 09:24:24.877842 containerd[1558]: time="2024-07-02T09:24:24.877807391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 09:24:26.140994 containerd[1558]: time="2024-07-02T09:24:26.140222053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:26.140994 containerd[1558]: time="2024-07-02T09:24:26.140932751Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120"
Jul 2 09:24:26.141993 containerd[1558]: time="2024-07-02T09:24:26.141963320Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:26.144681 containerd[1558]: time="2024-07-02T09:24:26.144650142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:26.146141 containerd[1558]: time="2024-07-02T09:24:26.146100488Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.268252326s"
Jul 2 09:24:26.146141 containerd[1558]: time="2024-07-02T09:24:26.146136764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\""
Jul 2 09:24:26.165179 containerd[1558]: time="2024-07-02T09:24:26.165148035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 09:24:26.367364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 09:24:26.377188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:24:26.466643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:24:26.470577 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:24:26.510397 kubelet[1999]: E0702 09:24:26.510349 1999 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:24:26.514494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:24:26.514680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:24:28.324078 containerd[1558]: time="2024-07-02T09:24:28.324014320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:28.325602 containerd[1558]: time="2024-07-02T09:24:28.325548721Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440"
Jul 2 09:24:28.326234 containerd[1558]: time="2024-07-02T09:24:28.326205126Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:28.329056 containerd[1558]: time="2024-07-02T09:24:28.328996170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:28.330621 containerd[1558]: time="2024-07-02T09:24:28.330574539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 2.16538552s"
Jul 2 09:24:28.330621 containerd[1558]: time="2024-07-02T09:24:28.330616075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\""
Jul 2 09:24:28.348379 containerd[1558]: time="2024-07-02T09:24:28.348343303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 09:24:29.321802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528451407.mount: Deactivated successfully.
Jul 2 09:24:29.624620 containerd[1558]: time="2024-07-02T09:24:29.624555426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:29.625393 containerd[1558]: time="2024-07-02T09:24:29.625355999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463"
Jul 2 09:24:29.626347 containerd[1558]: time="2024-07-02T09:24:29.626311143Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:29.628828 containerd[1558]: time="2024-07-02T09:24:29.628745283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:29.630332 containerd[1558]: time="2024-07-02T09:24:29.629606751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.281225575s"
Jul 2 09:24:29.630332 containerd[1558]: time="2024-07-02T09:24:29.629648106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\""
Jul 2 09:24:29.649550 containerd[1558]: time="2024-07-02T09:24:29.649499638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 09:24:30.103719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009157634.mount: Deactivated successfully.
Jul 2 09:24:30.110243 containerd[1558]: time="2024-07-02T09:24:30.110199340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:30.111064 containerd[1558]: time="2024-07-02T09:24:30.111028541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 09:24:30.112195 containerd[1558]: time="2024-07-02T09:24:30.112153078Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:30.114258 containerd[1558]: time="2024-07-02T09:24:30.114212258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:30.114994 containerd[1558]: time="2024-07-02T09:24:30.114960952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 465.420316ms"
Jul 2 09:24:30.115066 containerd[1558]: time="2024-07-02T09:24:30.114993267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 09:24:30.134739 containerd[1558]: time="2024-07-02T09:24:30.134665458Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 09:24:30.677497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067126732.mount: Deactivated successfully.
Jul 2 09:24:31.962433 containerd[1558]: time="2024-07-02T09:24:31.962381282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:31.963458 containerd[1558]: time="2024-07-02T09:24:31.963316034Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Jul 2 09:24:31.964253 containerd[1558]: time="2024-07-02T09:24:31.964212874Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:31.967386 containerd[1558]: time="2024-07-02T09:24:31.967325573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:31.968663 containerd[1558]: time="2024-07-02T09:24:31.968543428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.833845055s"
Jul 2 09:24:31.968663 containerd[1558]: time="2024-07-02T09:24:31.968577310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 09:24:31.987547 containerd[1558]: time="2024-07-02T09:24:31.987497981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 09:24:32.494475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806997806.mount: Deactivated successfully.
Jul 2 09:24:33.722161 containerd[1558]: time="2024-07-02T09:24:33.722098733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:33.722894 containerd[1558]: time="2024-07-02T09:24:33.722836384Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Jul 2 09:24:33.723550 containerd[1558]: time="2024-07-02T09:24:33.723509749Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:33.726202 containerd[1558]: time="2024-07-02T09:24:33.726168948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:24:33.727406 containerd[1558]: time="2024-07-02T09:24:33.727372852Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.73983911s"
Jul 2 09:24:33.727433 containerd[1558]: time="2024-07-02T09:24:33.727406112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 09:24:36.617285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 09:24:36.627215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:24:36.711537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:24:36.715604 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:24:36.754454 kubelet[2185]: E0702 09:24:36.754395 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:24:36.757314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:24:36.757492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:24:39.236670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:24:39.246286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:24:39.261458 systemd[1]: Reloading requested from client PID 2204 ('systemctl') (unit session-7.scope)...
Jul 2 09:24:39.261480 systemd[1]: Reloading...
Jul 2 09:24:39.321140 zram_generator::config[2244]: No configuration found.
Jul 2 09:24:39.409475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:24:39.460189 systemd[1]: Reloading finished in 198 ms.
Jul 2 09:24:39.498874 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 09:24:39.498949 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 09:24:39.499347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:24:39.501746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:24:39.599779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:24:39.604135 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 09:24:39.647758 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:24:39.649078 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 09:24:39.649078 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:24:39.649078 kubelet[2299]: I0702 09:24:39.648186 2299 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 09:24:40.758753 kubelet[2299]: I0702 09:24:40.758718 2299 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 09:24:40.761221 kubelet[2299]: I0702 09:24:40.759185 2299 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 09:24:40.761221 kubelet[2299]: I0702 09:24:40.759430 2299 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 09:24:40.783923 kubelet[2299]: I0702 09:24:40.783883 2299 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 09:24:40.785514 kubelet[2299]: E0702 09:24:40.785492 2299 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.794752 kubelet[2299]: W0702 09:24:40.794718 2299 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 09:24:40.795541 kubelet[2299]: I0702 09:24:40.795518 2299 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 09:24:40.795862 kubelet[2299]: I0702 09:24:40.795837 2299 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.796012 2299 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.796059 2299 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.796069 2299 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.796235 2299 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.797343 2299 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.797364 2299 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 09:24:40.797958 kubelet[2299]: I0702 09:24:40.797453 2299 kubelet.go:309] "Adding apiserver pod source"
Jul 2 09:24:40.798233 kubelet[2299]: I0702 09:24:40.797463 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 09:24:40.800104 kubelet[2299]: W0702 09:24:40.799705 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.800104 kubelet[2299]: E0702 09:24:40.799758 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.800104 kubelet[2299]: W0702 09:24:40.800028 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.800104 kubelet[2299]: E0702 09:24:40.800085 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.803117 kubelet[2299]: I0702 09:24:40.803092 2299 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 09:24:40.806247 kubelet[2299]: W0702 09:24:40.806223 2299 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 09:24:40.807080 kubelet[2299]: I0702 09:24:40.807012 2299 server.go:1232] "Started kubelet"
Jul 2 09:24:40.807145 kubelet[2299]: I0702 09:24:40.807090 2299 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 09:24:40.807879 kubelet[2299]: I0702 09:24:40.807191 2299 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 09:24:40.807879 kubelet[2299]: I0702 09:24:40.807441 2299 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 09:24:40.807879 kubelet[2299]: I0702 09:24:40.807833 2299 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 09:24:40.810615 kubelet[2299]: E0702 09:24:40.810594 2299 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 09:24:40.810738 kubelet[2299]: E0702 09:24:40.810723 2299 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 09:24:40.811790 kubelet[2299]: E0702 09:24:40.811677 2299 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de5b1b5e551567", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 9, 24, 40, 806987111, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 9, 24, 40, 806987111, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.144:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.144:6443: connect: connection refused'(may retry after sleeping)
Jul 2 09:24:40.814020 kubelet[2299]: I0702 09:24:40.812072 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 09:24:40.814020 kubelet[2299]: I0702 09:24:40.813838 2299 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 09:24:40.814294 kubelet[2299]: I0702 09:24:40.814262 2299 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 09:24:40.814361 kubelet[2299]: I0702 09:24:40.814344 2299 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 09:24:40.816060 kubelet[2299]: W0702 09:24:40.816007 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.816193 kubelet[2299]: E0702 09:24:40.816180 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.816248 kubelet[2299]: E0702 09:24:40.816216 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms"
Jul 2 09:24:40.831962 kubelet[2299]: I0702 09:24:40.831200 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 09:24:40.832166 kubelet[2299]: I0702 09:24:40.832133 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 09:24:40.832166 kubelet[2299]: I0702 09:24:40.832167 2299 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 09:24:40.832234 kubelet[2299]: I0702 09:24:40.832187 2299 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 09:24:40.832256 kubelet[2299]: E0702 09:24:40.832236 2299 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 09:24:40.833058 kubelet[2299]: W0702 09:24:40.832850 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.833213 kubelet[2299]: E0702 09:24:40.833196 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jul 2 09:24:40.850461 kubelet[2299]: I0702 09:24:40.850440 2299 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 09:24:40.850565 kubelet[2299]: I0702 09:24:40.850554 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 09:24:40.850628 kubelet[2299]: I0702 09:24:40.850619 2299 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:24:40.916260 kubelet[2299]: I0702 09:24:40.916186 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 09:24:40.916636 kubelet[2299]: E0702 09:24:40.916619 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Jul 2 09:24:40.931258 kubelet[2299]: I0702 09:24:40.931236 2299 policy_none.go:49] "None
policy: Start" Jul 2 09:24:40.932708 kubelet[2299]: I0702 09:24:40.932101 2299 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 09:24:40.932708 kubelet[2299]: I0702 09:24:40.932147 2299 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:24:40.932843 kubelet[2299]: E0702 09:24:40.932762 2299 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 09:24:40.938025 kubelet[2299]: I0702 09:24:40.936992 2299 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:24:40.938025 kubelet[2299]: I0702 09:24:40.937272 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:24:40.938549 kubelet[2299]: E0702 09:24:40.938519 2299 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 09:24:41.017541 kubelet[2299]: E0702 09:24:41.017457 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Jul 2 09:24:41.118048 kubelet[2299]: I0702 09:24:41.117696 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:24:41.118123 kubelet[2299]: E0702 09:24:41.118099 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jul 2 09:24:41.133426 kubelet[2299]: I0702 09:24:41.133358 2299 topology_manager.go:215] "Topology Admit Handler" podUID="2dd00b1296751efebad85b9a339853d5" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:24:41.135138 kubelet[2299]: I0702 09:24:41.134228 2299 topology_manager.go:215] "Topology Admit Handler" 
podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:24:41.135418 kubelet[2299]: I0702 09:24:41.135369 2299 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:24:41.217606 kubelet[2299]: I0702 09:24:41.217544 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:41.217606 kubelet[2299]: I0702 09:24:41.217585 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:41.217606 kubelet[2299]: I0702 09:24:41.217606 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:41.217778 kubelet[2299]: I0702 09:24:41.217627 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:41.217778 kubelet[2299]: I0702 
09:24:41.217663 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:24:41.217778 kubelet[2299]: I0702 09:24:41.217683 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:41.217778 kubelet[2299]: I0702 09:24:41.217701 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:41.217778 kubelet[2299]: I0702 09:24:41.217721 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:41.217887 kubelet[2299]: I0702 09:24:41.217740 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:41.418549 kubelet[2299]: E0702 09:24:41.418500 2299 
controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Jul 2 09:24:41.440469 kubelet[2299]: E0702 09:24:41.440224 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:41.440469 kubelet[2299]: E0702 09:24:41.440261 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:41.440754 kubelet[2299]: E0702 09:24:41.440234 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:41.441042 containerd[1558]: time="2024-07-02T09:24:41.440998315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 09:24:41.441651 containerd[1558]: time="2024-07-02T09:24:41.441384359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2dd00b1296751efebad85b9a339853d5,Namespace:kube-system,Attempt:0,}" Jul 2 09:24:41.441651 containerd[1558]: time="2024-07-02T09:24:41.441480701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 09:24:41.519955 kubelet[2299]: I0702 09:24:41.519890 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:24:41.521133 kubelet[2299]: E0702 09:24:41.520199 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": 
dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jul 2 09:24:41.800266 kubelet[2299]: W0702 09:24:41.800141 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:41.800266 kubelet[2299]: E0702 09:24:41.800203 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:41.940510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895019716.mount: Deactivated successfully. Jul 2 09:24:41.944960 containerd[1558]: time="2024-07-02T09:24:41.944462525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:24:41.947371 containerd[1558]: time="2024-07-02T09:24:41.947065419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:24:41.948888 containerd[1558]: time="2024-07-02T09:24:41.948844135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 09:24:41.950110 containerd[1558]: time="2024-07-02T09:24:41.950074546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:24:41.951386 containerd[1558]: time="2024-07-02T09:24:41.951239396Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" 
Jul 2 09:24:41.952333 containerd[1558]: time="2024-07-02T09:24:41.952306426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:24:41.953086 containerd[1558]: time="2024-07-02T09:24:41.953013156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:24:41.954481 containerd[1558]: time="2024-07-02T09:24:41.954446882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:24:41.956864 containerd[1558]: time="2024-07-02T09:24:41.956676524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.566758ms" Jul 2 09:24:41.958278 containerd[1558]: time="2024-07-02T09:24:41.958121684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.560871ms" Jul 2 09:24:41.964965 containerd[1558]: time="2024-07-02T09:24:41.964473254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
523.025893ms" Jul 2 09:24:42.141619 kubelet[2299]: W0702 09:24:42.140108 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.141619 kubelet[2299]: E0702 09:24:42.140168 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.183826 kubelet[2299]: W0702 09:24:42.183735 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.183826 kubelet[2299]: E0702 09:24:42.183802 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.196796 containerd[1558]: time="2024-07-02T09:24:42.191547596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:24:42.196796 containerd[1558]: time="2024-07-02T09:24:42.196497038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.196796 containerd[1558]: time="2024-07-02T09:24:42.196512390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:24:42.196796 containerd[1558]: time="2024-07-02T09:24:42.196522265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.197127 containerd[1558]: time="2024-07-02T09:24:42.191719904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:24:42.197127 containerd[1558]: time="2024-07-02T09:24:42.197102755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.197127 containerd[1558]: time="2024-07-02T09:24:42.191863508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:24:42.197232 containerd[1558]: time="2024-07-02T09:24:42.197148691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.197232 containerd[1558]: time="2024-07-02T09:24:42.197176996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:24:42.197232 containerd[1558]: time="2024-07-02T09:24:42.197214016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.197403 containerd[1558]: time="2024-07-02T09:24:42.197308965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:24:42.197403 containerd[1558]: time="2024-07-02T09:24:42.197335391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:24:42.218985 kubelet[2299]: E0702 09:24:42.218891 2299 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Jul 2 09:24:42.250214 containerd[1558]: time="2024-07-02T09:24:42.250155877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2dd00b1296751efebad85b9a339853d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d4da5850423ce0cfc512fc8922a4a0c7e5485b84390493ff146fe2d18757257\"" Jul 2 09:24:42.250927 containerd[1558]: time="2024-07-02T09:24:42.250887208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b1fbf7a73aeca4a7a6f4105edaea89e941f33943ad274d2a07eb8629ccec2d9\"" Jul 2 09:24:42.252251 kubelet[2299]: E0702 09:24:42.252165 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:42.252716 kubelet[2299]: E0702 09:24:42.252374 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:42.253368 containerd[1558]: time="2024-07-02T09:24:42.252935556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4072f98d3e259d72566fbb0bf7e4eec1ac14218009257aa84ab0d34b7f3a78c6\"" Jul 2 09:24:42.253950 kubelet[2299]: E0702 09:24:42.253926 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:42.255553 containerd[1558]: time="2024-07-02T09:24:42.255511303Z" level=info msg="CreateContainer within sandbox \"5d4da5850423ce0cfc512fc8922a4a0c7e5485b84390493ff146fe2d18757257\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 09:24:42.255727 containerd[1558]: time="2024-07-02T09:24:42.255514941Z" level=info msg="CreateContainer within sandbox \"0b1fbf7a73aeca4a7a6f4105edaea89e941f33943ad274d2a07eb8629ccec2d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 09:24:42.255727 containerd[1558]: time="2024-07-02T09:24:42.255643592Z" level=info msg="CreateContainer within sandbox \"4072f98d3e259d72566fbb0bf7e4eec1ac14218009257aa84ab0d34b7f3a78c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 09:24:42.280270 containerd[1558]: time="2024-07-02T09:24:42.280218774Z" level=info msg="CreateContainer within sandbox \"4072f98d3e259d72566fbb0bf7e4eec1ac14218009257aa84ab0d34b7f3a78c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19fa2f905237de9dd03e2c36b7ac051a7ad6da2c8398a41a435dd8c1af2154e1\"" Jul 2 09:24:42.280927 containerd[1558]: time="2024-07-02T09:24:42.280898132Z" level=info msg="StartContainer for \"19fa2f905237de9dd03e2c36b7ac051a7ad6da2c8398a41a435dd8c1af2154e1\"" Jul 2 09:24:42.282089 containerd[1558]: time="2024-07-02T09:24:42.281992308Z" level=info msg="CreateContainer within sandbox \"5d4da5850423ce0cfc512fc8922a4a0c7e5485b84390493ff146fe2d18757257\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68d63aa15f8450bce205e682dda2893ccb982da3b5986aacc0b1f25f830b7f2b\"" Jul 2 09:24:42.282384 containerd[1558]: time="2024-07-02T09:24:42.282351597Z" level=info msg="StartContainer for \"68d63aa15f8450bce205e682dda2893ccb982da3b5986aacc0b1f25f830b7f2b\"" Jul 2 09:24:42.284389 containerd[1558]: time="2024-07-02T09:24:42.284279489Z" level=info 
msg="CreateContainer within sandbox \"0b1fbf7a73aeca4a7a6f4105edaea89e941f33943ad274d2a07eb8629ccec2d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe3249bbfac5e45d5116bcf3761f6a3c27666f7c88c967ec387c5c963a0788cc\"" Jul 2 09:24:42.284748 containerd[1558]: time="2024-07-02T09:24:42.284664084Z" level=info msg="StartContainer for \"fe3249bbfac5e45d5116bcf3761f6a3c27666f7c88c967ec387c5c963a0788cc\"" Jul 2 09:24:42.323235 kubelet[2299]: I0702 09:24:42.322710 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:24:42.323235 kubelet[2299]: E0702 09:24:42.323018 2299 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jul 2 09:24:42.337177 kubelet[2299]: W0702 09:24:42.335422 2299 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.337177 kubelet[2299]: E0702 09:24:42.335477 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jul 2 09:24:42.344881 containerd[1558]: time="2024-07-02T09:24:42.344706281Z" level=info msg="StartContainer for \"68d63aa15f8450bce205e682dda2893ccb982da3b5986aacc0b1f25f830b7f2b\" returns successfully" Jul 2 09:24:42.345086 containerd[1558]: time="2024-07-02T09:24:42.344781521Z" level=info msg="StartContainer for \"fe3249bbfac5e45d5116bcf3761f6a3c27666f7c88c967ec387c5c963a0788cc\" returns successfully" Jul 2 09:24:42.345297 containerd[1558]: time="2024-07-02T09:24:42.344784480Z" level=info 
msg="StartContainer for \"19fa2f905237de9dd03e2c36b7ac051a7ad6da2c8398a41a435dd8c1af2154e1\" returns successfully" Jul 2 09:24:42.842962 kubelet[2299]: E0702 09:24:42.842921 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:42.845563 kubelet[2299]: E0702 09:24:42.845436 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:42.847048 kubelet[2299]: E0702 09:24:42.846487 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:43.850237 kubelet[2299]: E0702 09:24:43.850171 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:43.927082 kubelet[2299]: I0702 09:24:43.926015 2299 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 09:24:44.621382 kubelet[2299]: E0702 09:24:44.621337 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 09:24:44.660792 kubelet[2299]: I0702 09:24:44.655989 2299 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 09:24:44.803101 kubelet[2299]: I0702 09:24:44.801740 2299 apiserver.go:52] "Watching apiserver" Jul 2 09:24:44.814934 kubelet[2299]: I0702 09:24:44.814894 2299 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:24:44.856473 kubelet[2299]: E0702 09:24:44.855922 2299 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:44.856473 kubelet[2299]: E0702 09:24:44.856405 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:47.209487 systemd[1]: Reloading requested from client PID 2574 ('systemctl') (unit session-7.scope)... Jul 2 09:24:47.209503 systemd[1]: Reloading... Jul 2 09:24:47.272059 zram_generator::config[2609]: No configuration found. Jul 2 09:24:47.359983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:24:47.415804 systemd[1]: Reloading finished in 206 ms. Jul 2 09:24:47.443459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:24:47.453841 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:24:47.454256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:24:47.463252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:24:47.609648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:24:47.612499 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:24:47.666158 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:24:47.666158 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 09:24:47.666158 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:24:47.666506 kubelet[2668]: I0702 09:24:47.666187 2668 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:24:47.672224 kubelet[2668]: I0702 09:24:47.672072 2668 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 09:24:47.672224 kubelet[2668]: I0702 09:24:47.672101 2668 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:24:47.672335 kubelet[2668]: I0702 09:24:47.672260 2668 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 09:24:47.675064 kubelet[2668]: I0702 09:24:47.673717 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 09:24:47.675064 kubelet[2668]: I0702 09:24:47.674743 2668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:24:47.681204 kubelet[2668]: W0702 09:24:47.681186 2668 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 09:24:47.682353 kubelet[2668]: I0702 09:24:47.682315 2668 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 09:24:47.682807 kubelet[2668]: I0702 09:24:47.682794 2668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:24:47.682956 kubelet[2668]: I0702 09:24:47.682940 2668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:24:47.683047 kubelet[2668]: I0702 09:24:47.682969 2668 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:24:47.683047 kubelet[2668]: I0702 09:24:47.682978 2668 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:24:47.683047 kubelet[2668]: I0702 
09:24:47.683015 2668 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:24:47.683156 kubelet[2668]: I0702 09:24:47.683139 2668 kubelet.go:393] "Attempting to sync node with API server" Jul 2 09:24:47.683185 kubelet[2668]: I0702 09:24:47.683163 2668 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:24:47.683616 kubelet[2668]: I0702 09:24:47.683597 2668 kubelet.go:309] "Adding apiserver pod source" Jul 2 09:24:47.683645 kubelet[2668]: I0702 09:24:47.683630 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:24:47.684455 kubelet[2668]: I0702 09:24:47.684392 2668 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:24:47.684861 kubelet[2668]: I0702 09:24:47.684838 2668 server.go:1232] "Started kubelet" Jul 2 09:24:47.686279 kubelet[2668]: I0702 09:24:47.686245 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:24:47.688335 sudo[2683]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 09:24:47.688581 sudo[2683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 09:24:47.690937 kubelet[2668]: E0702 09:24:47.690204 2668 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 09:24:47.690937 kubelet[2668]: E0702 09:24:47.690251 2668 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:24:47.690937 kubelet[2668]: I0702 09:24:47.690313 2668 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:24:47.690937 kubelet[2668]: I0702 09:24:47.690401 2668 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:24:47.690937 kubelet[2668]: I0702 09:24:47.690511 2668 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:24:47.699176 kubelet[2668]: I0702 09:24:47.699153 2668 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:24:47.703436 kubelet[2668]: I0702 09:24:47.703411 2668 server.go:462] "Adding debug handlers to kubelet server" Jul 2 09:24:47.704735 kubelet[2668]: I0702 09:24:47.699346 2668 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 09:24:47.704948 kubelet[2668]: I0702 09:24:47.704920 2668 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:24:47.711685 kubelet[2668]: I0702 09:24:47.711067 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:24:47.715220 kubelet[2668]: I0702 09:24:47.715142 2668 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 09:24:47.715220 kubelet[2668]: I0702 09:24:47.715174 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:24:47.715220 kubelet[2668]: I0702 09:24:47.715192 2668 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 09:24:47.715335 kubelet[2668]: E0702 09:24:47.715251 2668 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.784985 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785007 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785023 2668 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785236 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785257 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785263 2668 policy_none.go:49] "None policy: Start" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785894 2668 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.785918 2668 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.786130 2668 state_mem.go:75] "Updated machine memory state" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.787088 2668 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:24:47.787845 kubelet[2668]: I0702 09:24:47.787293 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:24:47.795588 kubelet[2668]: I0702 09:24:47.795563 2668 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Jul 2 09:24:47.803497 kubelet[2668]: I0702 09:24:47.802811 2668 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 09:24:47.803497 kubelet[2668]: I0702 09:24:47.802889 2668 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 09:24:47.816121 kubelet[2668]: I0702 09:24:47.816084 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2dd00b1296751efebad85b9a339853d5" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:24:47.816223 kubelet[2668]: I0702 09:24:47.816206 2668 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:24:47.816262 kubelet[2668]: I0702 09:24:47.816247 2668 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:24:47.991524 kubelet[2668]: I0702 09:24:47.991411 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:47.991524 kubelet[2668]: I0702 09:24:47.991459 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:47.991524 kubelet[2668]: I0702 09:24:47.991481 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:47.991524 kubelet[2668]: I0702 09:24:47.991499 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:47.991524 kubelet[2668]: I0702 09:24:47.991528 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd00b1296751efebad85b9a339853d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2dd00b1296751efebad85b9a339853d5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:47.991714 kubelet[2668]: I0702 09:24:47.991550 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:47.991714 kubelet[2668]: I0702 09:24:47.991571 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:47.991714 kubelet[2668]: I0702 09:24:47.991591 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:24:47.991714 kubelet[2668]: I0702 09:24:47.991618 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:24:48.122329 kubelet[2668]: E0702 09:24:48.122292 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.122633 kubelet[2668]: E0702 09:24:48.122606 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.122936 kubelet[2668]: E0702 09:24:48.122915 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.167293 sudo[2683]: pam_unix(sudo:session): session closed for user root Jul 2 09:24:48.684373 kubelet[2668]: I0702 09:24:48.684333 2668 apiserver.go:52] "Watching apiserver" Jul 2 09:24:48.690501 kubelet[2668]: I0702 09:24:48.690470 2668 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:24:48.741080 kubelet[2668]: E0702 09:24:48.737796 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.744975 kubelet[2668]: E0702 09:24:48.744854 2668 kubelet.go:1890] "Failed creating 
a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 09:24:48.745368 kubelet[2668]: E0702 09:24:48.745315 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.748858 kubelet[2668]: E0702 09:24:48.748834 2668 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 09:24:48.749311 kubelet[2668]: E0702 09:24:48.749297 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:48.768550 kubelet[2668]: I0702 09:24:48.768502 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.768448191 podCreationTimestamp="2024-07-02 09:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:24:48.768077359 +0000 UTC m=+1.151697703" watchObservedRunningTime="2024-07-02 09:24:48.768448191 +0000 UTC m=+1.152068535" Jul 2 09:24:48.768624 kubelet[2668]: I0702 09:24:48.768598 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.768582748 podCreationTimestamp="2024-07-02 09:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:24:48.760409566 +0000 UTC m=+1.144029910" watchObservedRunningTime="2024-07-02 09:24:48.768582748 +0000 UTC m=+1.152203092" Jul 2 09:24:48.775416 kubelet[2668]: I0702 09:24:48.775321 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.775275361 podCreationTimestamp="2024-07-02 09:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:24:48.775082166 +0000 UTC m=+1.158702510" watchObservedRunningTime="2024-07-02 09:24:48.775275361 +0000 UTC m=+1.158895705" Jul 2 09:24:49.740332 kubelet[2668]: E0702 09:24:49.739363 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:49.740332 kubelet[2668]: E0702 09:24:49.740089 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:49.935055 kubelet[2668]: E0702 09:24:49.934837 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:49.964737 sudo[1765]: pam_unix(sudo:session): session closed for user root Jul 2 09:24:49.967352 sshd[1758]: pam_unix(sshd:session): session closed for user core Jul 2 09:24:49.971203 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:24:49.971384 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:54100.service: Deactivated successfully. Jul 2 09:24:49.973785 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:24:49.975018 systemd-logind[1540]: Removed session 7. 
Jul 2 09:24:51.197623 kubelet[2668]: E0702 09:24:51.197595 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:58.342109 kubelet[2668]: E0702 09:24:58.341947 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:24:59.945583 kubelet[2668]: E0702 09:24:59.945318 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:00.121113 update_engine[1544]: I0702 09:25:00.120867 1544 update_attempter.cc:509] Updating boot flags... Jul 2 09:25:00.141743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2751) Jul 2 09:25:00.178069 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2755) Jul 2 09:25:00.207253 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2755) Jul 2 09:25:01.212020 kubelet[2668]: E0702 09:25:01.211990 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.300890 kubelet[2668]: I0702 09:25:01.300855 2668 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 09:25:01.301251 containerd[1558]: time="2024-07-02T09:25:01.301203871Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 09:25:01.301542 kubelet[2668]: I0702 09:25:01.301443 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 09:25:01.359507 kubelet[2668]: I0702 09:25:01.357504 2668 topology_manager.go:215] "Topology Admit Handler" podUID="8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-jdh9p" Jul 2 09:25:01.390154 kubelet[2668]: I0702 09:25:01.390097 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-jdh9p\" (UID: \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\") " pod="kube-system/cilium-operator-6bc8ccdb58-jdh9p" Jul 2 09:25:01.390154 kubelet[2668]: I0702 09:25:01.390157 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xljm9\" (UniqueName: \"kubernetes.io/projected/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-kube-api-access-xljm9\") pod \"cilium-operator-6bc8ccdb58-jdh9p\" (UID: \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\") " pod="kube-system/cilium-operator-6bc8ccdb58-jdh9p" Jul 2 09:25:01.447724 kubelet[2668]: I0702 09:25:01.447691 2668 topology_manager.go:215] "Topology Admit Handler" podUID="ce9099ff-5bf0-4f41-9916-346efe393dba" podNamespace="kube-system" podName="kube-proxy-sdpnj" Jul 2 09:25:01.456015 kubelet[2668]: I0702 09:25:01.455965 2668 topology_manager.go:215] "Topology Admit Handler" podUID="d2846e4a-5290-4476-8418-efe18147cab9" podNamespace="kube-system" podName="cilium-bn4k2" Jul 2 09:25:01.490403 kubelet[2668]: I0702 09:25:01.490300 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce9099ff-5bf0-4f41-9916-346efe393dba-xtables-lock\") pod \"kube-proxy-sdpnj\" (UID: \"ce9099ff-5bf0-4f41-9916-346efe393dba\") 
" pod="kube-system/kube-proxy-sdpnj" Jul 2 09:25:01.490652 kubelet[2668]: I0702 09:25:01.490542 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffw4\" (UniqueName: \"kubernetes.io/projected/ce9099ff-5bf0-4f41-9916-346efe393dba-kube-api-access-sffw4\") pod \"kube-proxy-sdpnj\" (UID: \"ce9099ff-5bf0-4f41-9916-346efe393dba\") " pod="kube-system/kube-proxy-sdpnj" Jul 2 09:25:01.490652 kubelet[2668]: I0702 09:25:01.490573 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-hostproc\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490652 kubelet[2668]: I0702 09:25:01.490591 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-etc-cni-netd\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490652 kubelet[2668]: I0702 09:25:01.490610 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflxl\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-kube-api-access-nflxl\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490652 kubelet[2668]: I0702 09:25:01.490629 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce9099ff-5bf0-4f41-9916-346efe393dba-kube-proxy\") pod \"kube-proxy-sdpnj\" (UID: \"ce9099ff-5bf0-4f41-9916-346efe393dba\") " pod="kube-system/kube-proxy-sdpnj" Jul 2 09:25:01.490790 kubelet[2668]: I0702 09:25:01.490688 2668 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2846e4a-5290-4476-8418-efe18147cab9-clustermesh-secrets\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490790 kubelet[2668]: I0702 09:25:01.490727 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-net\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490790 kubelet[2668]: I0702 09:25:01.490750 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-kernel\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490790 kubelet[2668]: I0702 09:25:01.490783 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-cgroup\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490892 kubelet[2668]: I0702 09:25:01.490859 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce9099ff-5bf0-4f41-9916-346efe393dba-lib-modules\") pod \"kube-proxy-sdpnj\" (UID: \"ce9099ff-5bf0-4f41-9916-346efe393dba\") " pod="kube-system/kube-proxy-sdpnj" Jul 2 09:25:01.490917 kubelet[2668]: I0702 09:25:01.490896 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-run\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.490939 kubelet[2668]: I0702 09:25:01.490918 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cni-path\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.491007 kubelet[2668]: I0702 09:25:01.490947 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-lib-modules\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.491007 kubelet[2668]: I0702 09:25:01.490965 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-xtables-lock\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.491007 kubelet[2668]: I0702 09:25:01.490983 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-hubble-tls\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.491007 kubelet[2668]: I0702 09:25:01.491002 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-bpf-maps\") pod \"cilium-bn4k2\" (UID: 
\"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.491118 kubelet[2668]: I0702 09:25:01.491030 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2846e4a-5290-4476-8418-efe18147cab9-cilium-config-path\") pod \"cilium-bn4k2\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " pod="kube-system/cilium-bn4k2" Jul 2 09:25:01.660454 kubelet[2668]: E0702 09:25:01.660405 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.660950 containerd[1558]: time="2024-07-02T09:25:01.660899349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jdh9p,Uid:8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc,Namespace:kube-system,Attempt:0,}" Jul 2 09:25:01.680224 containerd[1558]: time="2024-07-02T09:25:01.679761544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:25:01.680224 containerd[1558]: time="2024-07-02T09:25:01.680168259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.680365 containerd[1558]: time="2024-07-02T09:25:01.680231499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:25:01.680365 containerd[1558]: time="2024-07-02T09:25:01.680258378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.717002 containerd[1558]: time="2024-07-02T09:25:01.716948138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jdh9p,Uid:8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\"" Jul 2 09:25:01.717812 kubelet[2668]: E0702 09:25:01.717783 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.720357 containerd[1558]: time="2024-07-02T09:25:01.720321422Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 09:25:01.750850 kubelet[2668]: E0702 09:25:01.750734 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.751436 containerd[1558]: time="2024-07-02T09:25:01.751108166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdpnj,Uid:ce9099ff-5bf0-4f41-9916-346efe393dba,Namespace:kube-system,Attempt:0,}" Jul 2 09:25:01.759347 kubelet[2668]: E0702 09:25:01.759325 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.759717 containerd[1558]: time="2024-07-02T09:25:01.759691032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bn4k2,Uid:d2846e4a-5290-4476-8418-efe18147cab9,Namespace:kube-system,Attempt:0,}" Jul 2 09:25:01.772210 containerd[1558]: time="2024-07-02T09:25:01.771573463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:25:01.772357 containerd[1558]: time="2024-07-02T09:25:01.772300855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.772357 containerd[1558]: time="2024-07-02T09:25:01.772335055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:25:01.772447 containerd[1558]: time="2024-07-02T09:25:01.772419014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.777775 containerd[1558]: time="2024-07-02T09:25:01.777652397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:25:01.777775 containerd[1558]: time="2024-07-02T09:25:01.777724156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.777775 containerd[1558]: time="2024-07-02T09:25:01.777740636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:25:01.777934 containerd[1558]: time="2024-07-02T09:25:01.777749996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:25:01.818690 containerd[1558]: time="2024-07-02T09:25:01.818653230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdpnj,Uid:ce9099ff-5bf0-4f41-9916-346efe393dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca9b25630cf0db66abe78cfe1f26eeb5d023d2b2c7334f676be891a2223ca4a\"" Jul 2 09:25:01.819723 kubelet[2668]: E0702 09:25:01.819701 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.822490 containerd[1558]: time="2024-07-02T09:25:01.822211071Z" level=info msg="CreateContainer within sandbox \"bca9b25630cf0db66abe78cfe1f26eeb5d023d2b2c7334f676be891a2223ca4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:25:01.824337 containerd[1558]: time="2024-07-02T09:25:01.824309048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bn4k2,Uid:d2846e4a-5290-4476-8418-efe18147cab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\"" Jul 2 09:25:01.825261 kubelet[2668]: E0702 09:25:01.825239 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:01.886218 containerd[1558]: time="2024-07-02T09:25:01.886161574Z" level=info msg="CreateContainer within sandbox \"bca9b25630cf0db66abe78cfe1f26eeb5d023d2b2c7334f676be891a2223ca4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd86e836902c38aea5815504f31ae5f98abc711eccffbc155fd577751b9894d0\"" Jul 2 09:25:01.886923 containerd[1558]: time="2024-07-02T09:25:01.886902486Z" level=info msg="StartContainer for \"dd86e836902c38aea5815504f31ae5f98abc711eccffbc155fd577751b9894d0\"" Jul 2 09:25:01.935895 containerd[1558]: 
time="2024-07-02T09:25:01.935844112Z" level=info msg="StartContainer for \"dd86e836902c38aea5815504f31ae5f98abc711eccffbc155fd577751b9894d0\" returns successfully" Jul 2 09:25:02.760823 kubelet[2668]: E0702 09:25:02.760794 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:25:02.770087 kubelet[2668]: I0702 09:25:02.769937 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sdpnj" podStartSLOduration=1.7699025800000001 podCreationTimestamp="2024-07-02 09:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:25:02.768544434 +0000 UTC m=+15.152164778" watchObservedRunningTime="2024-07-02 09:25:02.76990258 +0000 UTC m=+15.153522924" Jul 2 09:25:02.831896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203176183.mount: Deactivated successfully. 
Jul 2 09:25:03.072409 containerd[1558]: time="2024-07-02T09:25:03.072311475Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:03.073463 containerd[1558]: time="2024-07-02T09:25:03.073412785Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138266"
Jul 2 09:25:03.074857 containerd[1558]: time="2024-07-02T09:25:03.074662452Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:03.076197 containerd[1558]: time="2024-07-02T09:25:03.076010639Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.355649778s"
Jul 2 09:25:03.076197 containerd[1558]: time="2024-07-02T09:25:03.076059238Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 09:25:03.077575 containerd[1558]: time="2024-07-02T09:25:03.077296946Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 09:25:03.078718 containerd[1558]: time="2024-07-02T09:25:03.078523774Z" level=info msg="CreateContainer within sandbox \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 09:25:03.088761 containerd[1558]: time="2024-07-02T09:25:03.088720073Z" level=info msg="CreateContainer within sandbox \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\""
Jul 2 09:25:03.089287 containerd[1558]: time="2024-07-02T09:25:03.089245468Z" level=info msg="StartContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\""
Jul 2 09:25:03.130997 containerd[1558]: time="2024-07-02T09:25:03.130960295Z" level=info msg="StartContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" returns successfully"
Jul 2 09:25:03.769704 kubelet[2668]: E0702 09:25:03.769676 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:03.798942 kubelet[2668]: I0702 09:25:03.797383 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-jdh9p" podStartSLOduration=1.440803414 podCreationTimestamp="2024-07-02 09:25:01 +0000 UTC" firstStartedPulling="2024-07-02 09:25:01.719891186 +0000 UTC m=+14.103511530" lastFinishedPulling="2024-07-02 09:25:03.076431995 +0000 UTC m=+15.460052339" observedRunningTime="2024-07-02 09:25:03.795723079 +0000 UTC m=+16.179343423" watchObservedRunningTime="2024-07-02 09:25:03.797344223 +0000 UTC m=+16.180964567"
Jul 2 09:25:04.779517 kubelet[2668]: E0702 09:25:04.779483 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:05.513629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129501344.mount: Deactivated successfully.
Jul 2 09:25:06.757339 containerd[1558]: time="2024-07-02T09:25:06.757287700Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:06.759225 containerd[1558]: time="2024-07-02T09:25:06.758743568Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651494"
Jul 2 09:25:06.761330 containerd[1558]: time="2024-07-02T09:25:06.759871598Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:06.761601 containerd[1558]: time="2024-07-02T09:25:06.761557103Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.684219438s"
Jul 2 09:25:06.761653 containerd[1558]: time="2024-07-02T09:25:06.761605423Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 09:25:06.770026 containerd[1558]: time="2024-07-02T09:25:06.769991311Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 09:25:06.781192 containerd[1558]: time="2024-07-02T09:25:06.781156335Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\""
Jul 2 09:25:06.781653 containerd[1558]: time="2024-07-02T09:25:06.781632491Z" level=info msg="StartContainer for \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\""
Jul 2 09:25:06.823272 containerd[1558]: time="2024-07-02T09:25:06.823232573Z" level=info msg="StartContainer for \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\" returns successfully"
Jul 2 09:25:07.059481 containerd[1558]: time="2024-07-02T09:25:07.058995366Z" level=info msg="shim disconnected" id=d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782 namespace=k8s.io
Jul 2 09:25:07.059481 containerd[1558]: time="2024-07-02T09:25:07.059067486Z" level=warning msg="cleaning up after shim disconnected" id=d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782 namespace=k8s.io
Jul 2 09:25:07.059481 containerd[1558]: time="2024-07-02T09:25:07.059078486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:25:07.777897 systemd[1]: run-containerd-runc-k8s.io-d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782-runc.5IoaoF.mount: Deactivated successfully.
Jul 2 09:25:07.778052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782-rootfs.mount: Deactivated successfully.
Jul 2 09:25:07.780144 kubelet[2668]: E0702 09:25:07.780114 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:07.783896 containerd[1558]: time="2024-07-02T09:25:07.782921851Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 09:25:07.802438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319597369.mount: Deactivated successfully.
Jul 2 09:25:07.805806 containerd[1558]: time="2024-07-02T09:25:07.805735543Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\""
Jul 2 09:25:07.806455 containerd[1558]: time="2024-07-02T09:25:07.806418497Z" level=info msg="StartContainer for \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\""
Jul 2 09:25:07.855196 containerd[1558]: time="2024-07-02T09:25:07.854807459Z" level=info msg="StartContainer for \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\" returns successfully"
Jul 2 09:25:07.866349 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:25:07.866618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:25:07.866695 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:25:07.875407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:25:07.888586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:25:07.896259 containerd[1558]: time="2024-07-02T09:25:07.896185639Z" level=info msg="shim disconnected" id=cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632 namespace=k8s.io
Jul 2 09:25:07.896259 containerd[1558]: time="2024-07-02T09:25:07.896251198Z" level=warning msg="cleaning up after shim disconnected" id=cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632 namespace=k8s.io
Jul 2 09:25:07.896259 containerd[1558]: time="2024-07-02T09:25:07.896260678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:25:08.777662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632-rootfs.mount: Deactivated successfully.
Jul 2 09:25:08.783426 kubelet[2668]: E0702 09:25:08.783394 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:08.785669 containerd[1558]: time="2024-07-02T09:25:08.785633679Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 09:25:08.822626 containerd[1558]: time="2024-07-02T09:25:08.821774555Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\""
Jul 2 09:25:08.822626 containerd[1558]: time="2024-07-02T09:25:08.822535669Z" level=info msg="StartContainer for \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\""
Jul 2 09:25:08.875421 containerd[1558]: time="2024-07-02T09:25:08.875361013Z" level=info msg="StartContainer for \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\" returns successfully"
Jul 2 09:25:08.914951 containerd[1558]: time="2024-07-02T09:25:08.914846422Z" level=info msg="shim disconnected" id=8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd namespace=k8s.io
Jul 2 09:25:08.914951 containerd[1558]: time="2024-07-02T09:25:08.914938301Z" level=warning msg="cleaning up after shim disconnected" id=8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd namespace=k8s.io
Jul 2 09:25:08.914951 containerd[1558]: time="2024-07-02T09:25:08.914958501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:25:09.777581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd-rootfs.mount: Deactivated successfully.
Jul 2 09:25:09.786169 kubelet[2668]: E0702 09:25:09.786146 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:09.788205 containerd[1558]: time="2024-07-02T09:25:09.788071968Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 09:25:09.806328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227485888.mount: Deactivated successfully.
Jul 2 09:25:09.808171 containerd[1558]: time="2024-07-02T09:25:09.808133937Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\""
Jul 2 09:25:09.809365 containerd[1558]: time="2024-07-02T09:25:09.808620013Z" level=info msg="StartContainer for \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\""
Jul 2 09:25:09.862942 containerd[1558]: time="2024-07-02T09:25:09.862838604Z" level=info msg="StartContainer for \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\" returns successfully"
Jul 2 09:25:09.883307 containerd[1558]: time="2024-07-02T09:25:09.883248090Z" level=info msg="shim disconnected" id=86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d namespace=k8s.io
Jul 2 09:25:09.883307 containerd[1558]: time="2024-07-02T09:25:09.883301570Z" level=warning msg="cleaning up after shim disconnected" id=86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d namespace=k8s.io
Jul 2 09:25:09.883307 containerd[1558]: time="2024-07-02T09:25:09.883311450Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:25:10.777661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d-rootfs.mount: Deactivated successfully.
Jul 2 09:25:10.790124 kubelet[2668]: E0702 09:25:10.790080 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:10.793310 containerd[1558]: time="2024-07-02T09:25:10.793192634Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 09:25:10.812847 containerd[1558]: time="2024-07-02T09:25:10.812724373Z" level=info msg="CreateContainer within sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\""
Jul 2 09:25:10.814122 containerd[1558]: time="2024-07-02T09:25:10.813172849Z" level=info msg="StartContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\""
Jul 2 09:25:10.861104 containerd[1558]: time="2024-07-02T09:25:10.861061623Z" level=info msg="StartContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" returns successfully"
Jul 2 09:25:10.929024 kubelet[2668]: I0702 09:25:10.928091 2668 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 09:25:10.947278 kubelet[2668]: I0702 09:25:10.947246 2668 topology_manager.go:215] "Topology Admit Handler" podUID="34b00da4-f38d-4d84-8520-d0e7c67df522" podNamespace="kube-system" podName="coredns-5dd5756b68-l4bkk"
Jul 2 09:25:10.948376 kubelet[2668]: I0702 09:25:10.948256 2668 topology_manager.go:215] "Topology Admit Handler" podUID="50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42" podNamespace="kube-system" podName="coredns-5dd5756b68-7gbhx"
Jul 2 09:25:10.962389 kubelet[2668]: I0702 09:25:10.962365 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b00da4-f38d-4d84-8520-d0e7c67df522-config-volume\") pod \"coredns-5dd5756b68-l4bkk\" (UID: \"34b00da4-f38d-4d84-8520-d0e7c67df522\") " pod="kube-system/coredns-5dd5756b68-l4bkk"
Jul 2 09:25:10.962555 kubelet[2668]: I0702 09:25:10.962542 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58cq\" (UniqueName: \"kubernetes.io/projected/34b00da4-f38d-4d84-8520-d0e7c67df522-kube-api-access-w58cq\") pod \"coredns-5dd5756b68-l4bkk\" (UID: \"34b00da4-f38d-4d84-8520-d0e7c67df522\") " pod="kube-system/coredns-5dd5756b68-l4bkk"
Jul 2 09:25:10.962657 kubelet[2668]: I0702 09:25:10.962646 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hdm\" (UniqueName: \"kubernetes.io/projected/50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42-kube-api-access-j9hdm\") pod \"coredns-5dd5756b68-7gbhx\" (UID: \"50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42\") " pod="kube-system/coredns-5dd5756b68-7gbhx"
Jul 2 09:25:10.962777 kubelet[2668]: I0702 09:25:10.962764 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42-config-volume\") pod \"coredns-5dd5756b68-7gbhx\" (UID: \"50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42\") " pod="kube-system/coredns-5dd5756b68-7gbhx"
Jul 2 09:25:11.251157 kubelet[2668]: E0702 09:25:11.251123 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:11.251544 kubelet[2668]: E0702 09:25:11.251486 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:11.254393 containerd[1558]: time="2024-07-02T09:25:11.253572259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7gbhx,Uid:50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42,Namespace:kube-system,Attempt:0,}"
Jul 2 09:25:11.254393 containerd[1558]: time="2024-07-02T09:25:11.254195414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l4bkk,Uid:34b00da4-f38d-4d84-8520-d0e7c67df522,Namespace:kube-system,Attempt:0,}"
Jul 2 09:25:11.793540 kubelet[2668]: E0702 09:25:11.793507 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:11.805629 kubelet[2668]: I0702 09:25:11.805593 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bn4k2" podStartSLOduration=5.869963674 podCreationTimestamp="2024-07-02 09:25:01 +0000 UTC" firstStartedPulling="2024-07-02 09:25:01.826252787 +0000 UTC m=+14.209873131" lastFinishedPulling="2024-07-02 09:25:06.761835181 +0000 UTC m=+19.145455525" observedRunningTime="2024-07-02 09:25:11.804736594 +0000 UTC m=+24.188357018" watchObservedRunningTime="2024-07-02 09:25:11.805546068 +0000 UTC m=+24.189166412"
Jul 2 09:25:12.794477 kubelet[2668]: E0702 09:25:12.794449 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:13.068362 systemd-networkd[1242]: cilium_host: Link UP
Jul 2 09:25:13.068506 systemd-networkd[1242]: cilium_net: Link UP
Jul 2 09:25:13.068644 systemd-networkd[1242]: cilium_net: Gained carrier
Jul 2 09:25:13.068775 systemd-networkd[1242]: cilium_host: Gained carrier
Jul 2 09:25:13.068872 systemd-networkd[1242]: cilium_net: Gained IPv6LL
Jul 2 09:25:13.068986 systemd-networkd[1242]: cilium_host: Gained IPv6LL
Jul 2 09:25:13.146157 systemd-networkd[1242]: cilium_vxlan: Link UP
Jul 2 09:25:13.146166 systemd-networkd[1242]: cilium_vxlan: Gained carrier
Jul 2 09:25:13.426080 kernel: NET: Registered PF_ALG protocol family
Jul 2 09:25:13.796122 kubelet[2668]: E0702 09:25:13.795998 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:13.973618 systemd-networkd[1242]: lxc_health: Link UP
Jul 2 09:25:13.980785 systemd-networkd[1242]: lxc_health: Gained carrier
Jul 2 09:25:14.363093 systemd-networkd[1242]: lxc951869c87862: Link UP
Jul 2 09:25:14.371170 kernel: eth0: renamed from tmp1e20a
Jul 2 09:25:14.377536 systemd-networkd[1242]: lxc93186e418c7f: Link UP
Jul 2 09:25:14.377685 systemd-networkd[1242]: lxc951869c87862: Gained carrier
Jul 2 09:25:14.386073 kernel: eth0: renamed from tmp21db1
Jul 2 09:25:14.392715 systemd-networkd[1242]: lxc93186e418c7f: Gained carrier
Jul 2 09:25:14.989394 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL
Jul 2 09:25:15.117429 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Jul 2 09:25:15.693426 systemd-networkd[1242]: lxc93186e418c7f: Gained IPv6LL
Jul 2 09:25:15.761654 kubelet[2668]: E0702 09:25:15.761624 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:16.140336 systemd-networkd[1242]: lxc951869c87862: Gained IPv6LL
Jul 2 09:25:16.738961 kubelet[2668]: I0702 09:25:16.737780 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 09:25:16.738961 kubelet[2668]: E0702 09:25:16.738589 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:16.805059 kubelet[2668]: E0702 09:25:16.804804 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:17.892768 containerd[1558]: time="2024-07-02T09:25:17.892628607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:25:17.892768 containerd[1558]: time="2024-07-02T09:25:17.892707247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:25:17.895300 containerd[1558]: time="2024-07-02T09:25:17.894412757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:25:17.895300 containerd[1558]: time="2024-07-02T09:25:17.894459957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:25:17.900064 containerd[1558]: time="2024-07-02T09:25:17.895815390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:25:17.900064 containerd[1558]: time="2024-07-02T09:25:17.897100823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:25:17.900064 containerd[1558]: time="2024-07-02T09:25:17.897123422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:25:17.900064 containerd[1558]: time="2024-07-02T09:25:17.897133342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:25:17.916694 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:25:17.917167 systemd-resolved[1441]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:25:17.937188 containerd[1558]: time="2024-07-02T09:25:17.937154601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l4bkk,Uid:34b00da4-f38d-4d84-8520-d0e7c67df522,Namespace:kube-system,Attempt:0,} returns sandbox id \"21db1c44f3df4a1d6c2253b0d4c7817496e9b6145eb3b09ad2c2ca36271f9422\""
Jul 2 09:25:17.937849 kubelet[2668]: E0702 09:25:17.937829 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:17.939103 containerd[1558]: time="2024-07-02T09:25:17.939074470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7gbhx,Uid:50b31ec9-ff53-4f6c-83fc-ef0dd7a2da42,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e20a3a9bf6e19c66971b9c33f33ef34dc7f3f009eee8257f88f41a9b25aea9a\""
Jul 2 09:25:17.939552 kubelet[2668]: E0702 09:25:17.939537 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:17.939923 containerd[1558]: time="2024-07-02T09:25:17.939891466Z" level=info msg="CreateContainer within sandbox \"21db1c44f3df4a1d6c2253b0d4c7817496e9b6145eb3b09ad2c2ca36271f9422\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:25:17.941140 containerd[1558]: time="2024-07-02T09:25:17.941029299Z" level=info msg="CreateContainer within sandbox \"1e20a3a9bf6e19c66971b9c33f33ef34dc7f3f009eee8257f88f41a9b25aea9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:25:17.954659 containerd[1558]: time="2024-07-02T09:25:17.954600784Z" level=info msg="CreateContainer within sandbox \"1e20a3a9bf6e19c66971b9c33f33ef34dc7f3f009eee8257f88f41a9b25aea9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"778529012d86541f77aa4115900e44f02b7796c400035b64939d9eee5dc2cc9b\""
Jul 2 09:25:17.955882 containerd[1558]: time="2024-07-02T09:25:17.955185621Z" level=info msg="StartContainer for \"778529012d86541f77aa4115900e44f02b7796c400035b64939d9eee5dc2cc9b\""
Jul 2 09:25:17.959621 containerd[1558]: time="2024-07-02T09:25:17.959586757Z" level=info msg="CreateContainer within sandbox \"21db1c44f3df4a1d6c2253b0d4c7817496e9b6145eb3b09ad2c2ca36271f9422\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb0808f26ba8c427f4976d62a1ff1389b65b9ad07644814ab7a865fda14d5fae\""
Jul 2 09:25:17.960240 containerd[1558]: time="2024-07-02T09:25:17.960214473Z" level=info msg="StartContainer for \"bb0808f26ba8c427f4976d62a1ff1389b65b9ad07644814ab7a865fda14d5fae\""
Jul 2 09:25:18.005466 containerd[1558]: time="2024-07-02T09:25:18.005427704Z" level=info msg="StartContainer for \"778529012d86541f77aa4115900e44f02b7796c400035b64939d9eee5dc2cc9b\" returns successfully"
Jul 2 09:25:18.017887 containerd[1558]: time="2024-07-02T09:25:18.017780598Z" level=info msg="StartContainer for \"bb0808f26ba8c427f4976d62a1ff1389b65b9ad07644814ab7a865fda14d5fae\" returns successfully"
Jul 2 09:25:18.585294 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:52508.service - OpenSSH per-connection server daemon (10.0.0.1:52508).
Jul 2 09:25:18.627815 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 52508 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:18.629203 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:18.632907 systemd-logind[1540]: New session 8 of user core.
Jul 2 09:25:18.642326 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 09:25:18.812482 kubelet[2668]: E0702 09:25:18.811605 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:18.817800 kubelet[2668]: E0702 09:25:18.817760 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:18.836126 kubelet[2668]: I0702 09:25:18.835639 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-l4bkk" podStartSLOduration=17.835603781 podCreationTimestamp="2024-07-02 09:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:25:18.821735695 +0000 UTC m=+31.205356039" watchObservedRunningTime="2024-07-02 09:25:18.835603781 +0000 UTC m=+31.219224125"
Jul 2 09:25:18.852875 sshd[4052]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:18.859323 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:52508.service: Deactivated successfully.
Jul 2 09:25:18.863966 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit.
Jul 2 09:25:18.864852 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 09:25:18.866438 systemd-logind[1540]: Removed session 8.
Jul 2 09:25:19.816336 kubelet[2668]: E0702 09:25:19.816296 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:19.816723 kubelet[2668]: E0702 09:25:19.816418 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:20.818289 kubelet[2668]: E0702 09:25:20.818250 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:25:23.875838 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:36454.service - OpenSSH per-connection server daemon (10.0.0.1:36454).
Jul 2 09:25:23.921268 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 36454 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:23.922986 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:23.929496 systemd-logind[1540]: New session 9 of user core.
Jul 2 09:25:23.945367 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 09:25:24.061950 sshd[4075]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:24.065386 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:36454.service: Deactivated successfully.
Jul 2 09:25:24.067352 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit.
Jul 2 09:25:24.067513 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 09:25:24.068801 systemd-logind[1540]: Removed session 9.
Jul 2 09:25:29.071268 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:36460.service - OpenSSH per-connection server daemon (10.0.0.1:36460).
Jul 2 09:25:29.100589 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 36460 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:29.101808 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:29.105925 systemd-logind[1540]: New session 10 of user core.
Jul 2 09:25:29.115260 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 09:25:29.225259 sshd[4091]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:29.233263 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462).
Jul 2 09:25:29.233654 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:36460.service: Deactivated successfully.
Jul 2 09:25:29.235444 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 09:25:29.237612 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit.
Jul 2 09:25:29.239582 systemd-logind[1540]: Removed session 10.
Jul 2 09:25:29.264724 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:29.265872 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:29.269850 systemd-logind[1540]: New session 11 of user core.
Jul 2 09:25:29.278265 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 09:25:29.941407 sshd[4104]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:29.955694 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:36476.service - OpenSSH per-connection server daemon (10.0.0.1:36476).
Jul 2 09:25:29.957380 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:36462.service: Deactivated successfully.
Jul 2 09:25:29.966379 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 09:25:29.967762 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit.
Jul 2 09:25:29.968726 systemd-logind[1540]: Removed session 11.
Jul 2 09:25:29.999624 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 36476 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:30.000815 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:30.005084 systemd-logind[1540]: New session 12 of user core. Jul 2 09:25:30.011273 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 09:25:30.122463 sshd[4118]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:30.125127 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Jul 2 09:25:30.125279 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:36476.service: Deactivated successfully. Jul 2 09:25:30.128187 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 09:25:30.129277 systemd-logind[1540]: Removed session 12. Jul 2 09:25:35.138382 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:52176.service - OpenSSH per-connection server daemon (10.0.0.1:52176). Jul 2 09:25:35.193215 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 52176 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:35.194378 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:35.197744 systemd-logind[1540]: New session 13 of user core. Jul 2 09:25:35.207311 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 09:25:35.319516 sshd[4139]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:35.322970 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:52176.service: Deactivated successfully. Jul 2 09:25:35.324912 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Jul 2 09:25:35.324974 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 09:25:35.326146 systemd-logind[1540]: Removed session 13. Jul 2 09:25:40.338285 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:47944.service - OpenSSH per-connection server daemon (10.0.0.1:47944). 
Jul 2 09:25:40.371049 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 47944 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:40.372497 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:40.376344 systemd-logind[1540]: New session 14 of user core. Jul 2 09:25:40.386272 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 09:25:40.497676 sshd[4155]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:40.510252 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Jul 2 09:25:40.510611 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:47944.service: Deactivated successfully. Jul 2 09:25:40.515714 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 09:25:40.516523 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Jul 2 09:25:40.520614 systemd-logind[1540]: Removed session 14. Jul 2 09:25:40.544069 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:40.544794 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:40.548707 systemd-logind[1540]: New session 15 of user core. Jul 2 09:25:40.563318 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 09:25:40.787842 sshd[4167]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:40.807267 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952). Jul 2 09:25:40.807664 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:47948.service: Deactivated successfully. Jul 2 09:25:40.809279 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 09:25:40.811450 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Jul 2 09:25:40.812989 systemd-logind[1540]: Removed session 15. 
Jul 2 09:25:40.847825 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:40.849062 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:40.852737 systemd-logind[1540]: New session 16 of user core. Jul 2 09:25:40.860242 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 09:25:41.664708 sshd[4181]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:41.673532 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:47964.service - OpenSSH per-connection server daemon (10.0.0.1:47964). Jul 2 09:25:41.674340 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:47952.service: Deactivated successfully. Jul 2 09:25:41.678925 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 09:25:41.681090 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Jul 2 09:25:41.682778 systemd-logind[1540]: Removed session 16. Jul 2 09:25:41.710436 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 47964 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:41.711701 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:41.715885 systemd-logind[1540]: New session 17 of user core. Jul 2 09:25:41.730498 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 09:25:42.008894 sshd[4201]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:42.021260 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:47970.service - OpenSSH per-connection server daemon (10.0.0.1:47970). Jul 2 09:25:42.021682 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:47964.service: Deactivated successfully. Jul 2 09:25:42.023336 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 09:25:42.025772 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Jul 2 09:25:42.026717 systemd-logind[1540]: Removed session 17. 
Jul 2 09:25:42.058117 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 47970 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:42.059330 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:42.063128 systemd-logind[1540]: New session 18 of user core. Jul 2 09:25:42.080333 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 09:25:42.188686 sshd[4214]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:42.191452 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:47970.service: Deactivated successfully. Jul 2 09:25:42.195053 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Jul 2 09:25:42.195614 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 09:25:42.197644 systemd-logind[1540]: Removed session 18. Jul 2 09:25:47.198280 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:47980.service - OpenSSH per-connection server daemon (10.0.0.1:47980). Jul 2 09:25:47.227322 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 47980 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:47.228451 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:47.232835 systemd-logind[1540]: New session 19 of user core. Jul 2 09:25:47.246245 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 09:25:47.350624 sshd[4235]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:47.353986 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:47980.service: Deactivated successfully. Jul 2 09:25:47.355891 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 09:25:47.356101 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Jul 2 09:25:47.356948 systemd-logind[1540]: Removed session 19. Jul 2 09:25:52.361289 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:59096.service - OpenSSH per-connection server daemon (10.0.0.1:59096). 
Jul 2 09:25:52.390554 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 59096 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:52.391880 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:52.396321 systemd-logind[1540]: New session 20 of user core. Jul 2 09:25:52.403259 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 09:25:52.508976 sshd[4254]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:52.512093 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:59096.service: Deactivated successfully. Jul 2 09:25:52.514042 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Jul 2 09:25:52.514134 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 09:25:52.515723 systemd-logind[1540]: Removed session 20. Jul 2 09:25:57.522892 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:59112.service - OpenSSH per-connection server daemon (10.0.0.1:59112). Jul 2 09:25:57.552271 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 59112 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:25:57.553429 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:25:57.557081 systemd-logind[1540]: New session 21 of user core. Jul 2 09:25:57.563352 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 09:25:57.697232 sshd[4270]: pam_unix(sshd:session): session closed for user core Jul 2 09:25:57.702208 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:59112.service: Deactivated successfully. Jul 2 09:25:57.704338 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Jul 2 09:25:57.705415 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 09:25:57.706867 systemd-logind[1540]: Removed session 21. 
Jul 2 09:26:00.716163 kubelet[2668]: E0702 09:26:00.716125 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:02.708270 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:43340.service - OpenSSH per-connection server daemon (10.0.0.1:43340). Jul 2 09:26:02.738731 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 43340 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:02.740012 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:02.744307 systemd-logind[1540]: New session 22 of user core. Jul 2 09:26:02.755337 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 09:26:02.864487 sshd[4287]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:02.873372 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:43342.service - OpenSSH per-connection server daemon (10.0.0.1:43342). Jul 2 09:26:02.873819 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:43340.service: Deactivated successfully. Jul 2 09:26:02.875842 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 09:26:02.877418 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Jul 2 09:26:02.878738 systemd-logind[1540]: Removed session 22. Jul 2 09:26:02.902928 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 43342 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:02.904170 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:02.908704 systemd-logind[1540]: New session 23 of user core. Jul 2 09:26:02.922250 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 2 09:26:04.770158 kubelet[2668]: I0702 09:26:04.770120 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7gbhx" podStartSLOduration=63.770076409 podCreationTimestamp="2024-07-02 09:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:25:18.845254609 +0000 UTC m=+31.228874953" watchObservedRunningTime="2024-07-02 09:26:04.770076409 +0000 UTC m=+77.153696793" Jul 2 09:26:04.775362 containerd[1558]: time="2024-07-02T09:26:04.775315320Z" level=info msg="StopContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" with timeout 30 (s)" Jul 2 09:26:04.777171 containerd[1558]: time="2024-07-02T09:26:04.775713363Z" level=info msg="Stop container \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" with signal terminated" Jul 2 09:26:04.809044 containerd[1558]: time="2024-07-02T09:26:04.808989725Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:26:04.812403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb-rootfs.mount: Deactivated successfully. 
Jul 2 09:26:04.815697 containerd[1558]: time="2024-07-02T09:26:04.815555765Z" level=info msg="StopContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" with timeout 2 (s)" Jul 2 09:26:04.815862 containerd[1558]: time="2024-07-02T09:26:04.815809566Z" level=info msg="Stop container \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" with signal terminated" Jul 2 09:26:04.819431 containerd[1558]: time="2024-07-02T09:26:04.819382348Z" level=info msg="shim disconnected" id=4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb namespace=k8s.io Jul 2 09:26:04.819431 containerd[1558]: time="2024-07-02T09:26:04.819432188Z" level=warning msg="cleaning up after shim disconnected" id=4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb namespace=k8s.io Jul 2 09:26:04.819562 containerd[1558]: time="2024-07-02T09:26:04.819441148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:04.821893 systemd-networkd[1242]: lxc_health: Link DOWN Jul 2 09:26:04.821899 systemd-networkd[1242]: lxc_health: Lost carrier Jul 2 09:26:04.834274 containerd[1558]: time="2024-07-02T09:26:04.834225718Z" level=info msg="StopContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" returns successfully" Jul 2 09:26:04.834875 containerd[1558]: time="2024-07-02T09:26:04.834845522Z" level=info msg="StopPodSandbox for \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\"" Jul 2 09:26:04.835082 containerd[1558]: time="2024-07-02T09:26:04.835019123Z" level=info msg="Container to stop \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.836860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0-shm.mount: Deactivated successfully. 
Jul 2 09:26:04.863941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0-rootfs.mount: Deactivated successfully. Jul 2 09:26:04.870618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd-rootfs.mount: Deactivated successfully. Jul 2 09:26:04.871834 containerd[1558]: time="2024-07-02T09:26:04.871677625Z" level=info msg="shim disconnected" id=84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0 namespace=k8s.io Jul 2 09:26:04.871834 containerd[1558]: time="2024-07-02T09:26:04.871735826Z" level=warning msg="cleaning up after shim disconnected" id=84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0 namespace=k8s.io Jul 2 09:26:04.871834 containerd[1558]: time="2024-07-02T09:26:04.871745866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:04.872199 containerd[1558]: time="2024-07-02T09:26:04.871748866Z" level=info msg="shim disconnected" id=32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd namespace=k8s.io Jul 2 09:26:04.872199 containerd[1558]: time="2024-07-02T09:26:04.871938467Z" level=warning msg="cleaning up after shim disconnected" id=32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd namespace=k8s.io Jul 2 09:26:04.872199 containerd[1558]: time="2024-07-02T09:26:04.871948947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:04.884808 containerd[1558]: time="2024-07-02T09:26:04.884769385Z" level=info msg="TearDown network for sandbox \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\" successfully" Jul 2 09:26:04.884808 containerd[1558]: time="2024-07-02T09:26:04.884799465Z" level=info msg="StopPodSandbox for \"84e08bb3c51069050cea3390f8eb4b51c6e0067930818628fd098bb637a820d0\" returns successfully" Jul 2 09:26:04.886403 containerd[1558]: time="2024-07-02T09:26:04.886372435Z" level=info 
msg="StopContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" returns successfully" Jul 2 09:26:04.888805 containerd[1558]: time="2024-07-02T09:26:04.888613968Z" level=info msg="StopPodSandbox for \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\"" Jul 2 09:26:04.888805 containerd[1558]: time="2024-07-02T09:26:04.888730649Z" level=info msg="Container to stop \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.888805 containerd[1558]: time="2024-07-02T09:26:04.888770369Z" level=info msg="Container to stop \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.888805 containerd[1558]: time="2024-07-02T09:26:04.888780009Z" level=info msg="Container to stop \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.889069 containerd[1558]: time="2024-07-02T09:26:04.888790049Z" level=info msg="Container to stop \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.889069 containerd[1558]: time="2024-07-02T09:26:04.888971210Z" level=info msg="Container to stop \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 09:26:04.906319 kubelet[2668]: I0702 09:26:04.905916 2668 scope.go:117] "RemoveContainer" containerID="4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb" Jul 2 09:26:04.910959 containerd[1558]: time="2024-07-02T09:26:04.910874183Z" level=info msg="RemoveContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\"" Jul 2 09:26:04.918874 containerd[1558]: 
time="2024-07-02T09:26:04.916079295Z" level=info msg="RemoveContainer for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" returns successfully" Jul 2 09:26:04.918874 containerd[1558]: time="2024-07-02T09:26:04.916503098Z" level=error msg="ContainerStatus for \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\": not found" Jul 2 09:26:04.918874 containerd[1558]: time="2024-07-02T09:26:04.918458789Z" level=info msg="shim disconnected" id=b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea namespace=k8s.io Jul 2 09:26:04.918874 containerd[1558]: time="2024-07-02T09:26:04.918493390Z" level=warning msg="cleaning up after shim disconnected" id=b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea namespace=k8s.io Jul 2 09:26:04.918874 containerd[1558]: time="2024-07-02T09:26:04.918501350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:04.919106 kubelet[2668]: I0702 09:26:04.916343 2668 scope.go:117] "RemoveContainer" containerID="4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb" Jul 2 09:26:04.919106 kubelet[2668]: E0702 09:26:04.917801 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\": not found" containerID="4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb" Jul 2 09:26:04.919106 kubelet[2668]: I0702 09:26:04.917913 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb"} err="failed to get container status \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"4e3b9e5e859cba78b4affca225c2a0f315ff061f2659dfb10a90e307a6e8f3cb\": not found" Jul 2 09:26:04.930466 containerd[1558]: time="2024-07-02T09:26:04.930234021Z" level=info msg="TearDown network for sandbox \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" successfully" Jul 2 09:26:04.930466 containerd[1558]: time="2024-07-02T09:26:04.930260101Z" level=info msg="StopPodSandbox for \"b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea\" returns successfully" Jul 2 09:26:04.966905 kubelet[2668]: I0702 09:26:04.966865 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nflxl\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-kube-api-access-nflxl\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.966905 kubelet[2668]: I0702 09:26:04.966908 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-kernel\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967058 kubelet[2668]: I0702 09:26:04.966927 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-net\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967058 kubelet[2668]: I0702 09:26:04.966944 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-lib-modules\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967058 kubelet[2668]: 
I0702 09:26:04.966965 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-hubble-tls\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967058 kubelet[2668]: I0702 09:26:04.966986 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-cilium-config-path\") pod \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\" (UID: \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\") " Jul 2 09:26:04.967058 kubelet[2668]: I0702 09:26:04.966977 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967058 kubelet[2668]: I0702 09:26:04.967001 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-run\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967248 kubelet[2668]: I0702 09:26:04.967020 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2846e4a-5290-4476-8418-efe18147cab9-cilium-config-path\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967248 kubelet[2668]: I0702 09:26:04.967050 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-bpf-maps\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967248 kubelet[2668]: I0702 09:26:04.967022 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967248 kubelet[2668]: I0702 09:26:04.967068 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cni-path\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967248 kubelet[2668]: I0702 09:26:04.967070 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967363 kubelet[2668]: I0702 09:26:04.967030 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967363 kubelet[2668]: I0702 09:26:04.967086 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-etc-cni-netd\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967363 kubelet[2668]: I0702 09:26:04.967108 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967363 kubelet[2668]: I0702 09:26:04.967121 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-cgroup\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967363 kubelet[2668]: I0702 09:26:04.967129 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967143 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-xtables-lock\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967167 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2846e4a-5290-4476-8418-efe18147cab9-clustermesh-secrets\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967187 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xljm9\" (UniqueName: \"kubernetes.io/projected/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-kube-api-access-xljm9\") pod \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\" (UID: \"8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc\") " Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967205 2668 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-hostproc\") pod \"d2846e4a-5290-4476-8418-efe18147cab9\" (UID: \"d2846e4a-5290-4476-8418-efe18147cab9\") " Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967239 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967250 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967472 kubelet[2668]: I0702 09:26:04.967260 2668 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967269 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967279 2668 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967288 2668 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967304 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967088 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967621 kubelet[2668]: I0702 09:26:04.967324 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.967760 kubelet[2668]: I0702 09:26:04.967338 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:26:04.969732 kubelet[2668]: I0702 09:26:04.969311 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" (UID: "8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:26:04.969732 kubelet[2668]: I0702 09:26:04.969586 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2846e4a-5290-4476-8418-efe18147cab9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:26:04.970983 kubelet[2668]: I0702 09:26:04.970947 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-kube-api-access-nflxl" (OuterVolumeSpecName: "kube-api-access-nflxl") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "kube-api-access-nflxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:26:04.971065 kubelet[2668]: I0702 09:26:04.970986 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2846e4a-5290-4476-8418-efe18147cab9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:26:04.971418 kubelet[2668]: I0702 09:26:04.971384 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2846e4a-5290-4476-8418-efe18147cab9" (UID: "d2846e4a-5290-4476-8418-efe18147cab9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:26:04.971837 kubelet[2668]: I0702 09:26:04.971805 2668 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-kube-api-access-xljm9" (OuterVolumeSpecName: "kube-api-access-xljm9") pod "8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" (UID: "8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc"). InnerVolumeSpecName "kube-api-access-xljm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068134 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nflxl\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-kube-api-access-nflxl\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068161 2668 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2846e4a-5290-4476-8418-efe18147cab9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068172 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068184 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2846e4a-5290-4476-8418-efe18147cab9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068193 2668 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068202 kubelet[2668]: I0702 09:26:05.068202 2668 reconciler_common.go:300] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068375 kubelet[2668]: I0702 09:26:05.068211 2668 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2846e4a-5290-4476-8418-efe18147cab9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068375 kubelet[2668]: I0702 09:26:05.068220 2668 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068375 kubelet[2668]: I0702 09:26:05.068230 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xljm9\" (UniqueName: \"kubernetes.io/projected/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc-kube-api-access-xljm9\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.068375 kubelet[2668]: I0702 09:26:05.068238 2668 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2846e4a-5290-4476-8418-efe18147cab9-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 09:26:05.717744 kubelet[2668]: I0702 09:26:05.717697 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" path="/var/lib/kubelet/pods/8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc/volumes" Jul 2 09:26:05.796154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea-rootfs.mount: Deactivated successfully. Jul 2 09:26:05.796307 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b266fd1a75f77bf8567ad41dfa961c11711dc82d582454bb11bda156c998ffea-shm.mount: Deactivated successfully. 
Jul 2 09:26:05.796393 systemd[1]: var-lib-kubelet-pods-d2846e4a\x2d5290\x2d4476\x2d8418\x2defe18147cab9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnflxl.mount: Deactivated successfully. Jul 2 09:26:05.796476 systemd[1]: var-lib-kubelet-pods-d2846e4a\x2d5290\x2d4476\x2d8418\x2defe18147cab9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:26:05.796553 systemd[1]: var-lib-kubelet-pods-d2846e4a\x2d5290\x2d4476\x2d8418\x2defe18147cab9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 09:26:05.796624 systemd[1]: var-lib-kubelet-pods-8223be1a\x2d0d09\x2d4a88\x2db3c6\x2dad4e6a3d66dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxljm9.mount: Deactivated successfully. Jul 2 09:26:05.918930 kubelet[2668]: I0702 09:26:05.918820 2668 scope.go:117] "RemoveContainer" containerID="32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd" Jul 2 09:26:05.920565 containerd[1558]: time="2024-07-02T09:26:05.920280422Z" level=info msg="RemoveContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\"" Jul 2 09:26:05.924092 containerd[1558]: time="2024-07-02T09:26:05.923254160Z" level=info msg="RemoveContainer for \"32ea7589746b3feb8600d45451876f32e1bc28f0f701f13ed622c03ae7cc05fd\" returns successfully" Jul 2 09:26:05.924162 kubelet[2668]: I0702 09:26:05.923423 2668 scope.go:117] "RemoveContainer" containerID="86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d" Jul 2 09:26:05.938024 containerd[1558]: time="2024-07-02T09:26:05.937993646Z" level=info msg="RemoveContainer for \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\"" Jul 2 09:26:05.940479 containerd[1558]: time="2024-07-02T09:26:05.940440420Z" level=info msg="RemoveContainer for \"86f7db1d35e4278f0c495dcd97662837b0652c8c155e1a9ff7aefc5b597ece2d\" returns successfully" Jul 2 09:26:05.940659 kubelet[2668]: I0702 09:26:05.940604 2668 scope.go:117] 
"RemoveContainer" containerID="8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd" Jul 2 09:26:05.941573 containerd[1558]: time="2024-07-02T09:26:05.941549667Z" level=info msg="RemoveContainer for \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\"" Jul 2 09:26:05.943514 containerd[1558]: time="2024-07-02T09:26:05.943481078Z" level=info msg="RemoveContainer for \"8fba75da81f1838119112b7ecc9d39221932353267d25fdc6974093e8defc4dd\" returns successfully" Jul 2 09:26:05.943671 kubelet[2668]: I0702 09:26:05.943639 2668 scope.go:117] "RemoveContainer" containerID="cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632" Jul 2 09:26:05.944513 containerd[1558]: time="2024-07-02T09:26:05.944477844Z" level=info msg="RemoveContainer for \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\"" Jul 2 09:26:05.946676 containerd[1558]: time="2024-07-02T09:26:05.946630576Z" level=info msg="RemoveContainer for \"cee7357e5d7b43505f830b33d4c41a42cb6a351e83aab35b27d232fadc8a2632\" returns successfully" Jul 2 09:26:05.946818 kubelet[2668]: I0702 09:26:05.946787 2668 scope.go:117] "RemoveContainer" containerID="d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782" Jul 2 09:26:05.947906 containerd[1558]: time="2024-07-02T09:26:05.947694183Z" level=info msg="RemoveContainer for \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\"" Jul 2 09:26:05.949749 containerd[1558]: time="2024-07-02T09:26:05.949675714Z" level=info msg="RemoveContainer for \"d8e22a2cd064ece5204ab6719b7a2032f6becec34e8ae0e1fd906febe138f782\" returns successfully" Jul 2 09:26:06.716529 kubelet[2668]: E0702 09:26:06.716501 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:06.737019 sshd[4299]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:06.750270 systemd[1]: Started 
sshd@23-10.0.0.144:22-10.0.0.1:43344.service - OpenSSH per-connection server daemon (10.0.0.1:43344). Jul 2 09:26:06.750663 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:43342.service: Deactivated successfully. Jul 2 09:26:06.755427 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:26:06.756258 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:26:06.757950 systemd-logind[1540]: Removed session 23. Jul 2 09:26:06.782137 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 43344 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:06.782956 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:06.787599 systemd-logind[1540]: New session 24 of user core. Jul 2 09:26:06.798350 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 09:26:07.718349 kubelet[2668]: I0702 09:26:07.718304 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d2846e4a-5290-4476-8418-efe18147cab9" path="/var/lib/kubelet/pods/d2846e4a-5290-4476-8418-efe18147cab9/volumes" Jul 2 09:26:07.806848 kubelet[2668]: E0702 09:26:07.806791 2668 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 09:26:08.154502 sshd[4468]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:08.163773 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:43350.service - OpenSSH per-connection server daemon (10.0.0.1:43350). Jul 2 09:26:08.165158 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:43344.service: Deactivated successfully. 
Jul 2 09:26:08.172815 kubelet[2668]: I0702 09:26:08.172778 2668 topology_manager.go:215] "Topology Admit Handler" podUID="ff1fb695-ccb6-424c-b744-28f3dbf7195c" podNamespace="kube-system" podName="cilium-kf95f" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172834 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="clean-cilium-state" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172844 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="mount-cgroup" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172852 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="mount-bpf-fs" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172862 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" containerName="cilium-operator" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172868 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="apply-sysctl-overwrites" Jul 2 09:26:08.172965 kubelet[2668]: E0702 09:26:08.172874 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="cilium-agent" Jul 2 09:26:08.172965 kubelet[2668]: I0702 09:26:08.172894 2668 memory_manager.go:346] "RemoveStaleState removing state" podUID="8223be1a-0d09-4a88-b3c6-ad4e6a3d66dc" containerName="cilium-operator" Jul 2 09:26:08.172965 kubelet[2668]: I0702 09:26:08.172901 2668 memory_manager.go:346] "RemoveStaleState removing state" podUID="d2846e4a-5290-4476-8418-efe18147cab9" containerName="cilium-agent" Jul 2 09:26:08.174694 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:26:08.177108 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. 
Jul 2 09:26:08.188789 kubelet[2668]: I0702 09:26:08.188749 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-host-proc-sys-kernel\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.188789 kubelet[2668]: I0702 09:26:08.188793 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-cilium-cgroup\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188817 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff1fb695-ccb6-424c-b744-28f3dbf7195c-cilium-ipsec-secrets\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188837 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-host-proc-sys-net\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188857 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-cni-path\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188876 2668 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-xtables-lock\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188894 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-hostproc\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.189024 kubelet[2668]: I0702 09:26:08.188915 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-bpf-maps\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.190473 kubelet[2668]: I0702 09:26:08.188933 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-etc-cni-netd\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.190473 kubelet[2668]: I0702 09:26:08.188954 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff1fb695-ccb6-424c-b744-28f3dbf7195c-cilium-config-path\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.190473 kubelet[2668]: I0702 09:26:08.188973 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/ff1fb695-ccb6-424c-b744-28f3dbf7195c-hubble-tls\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.190473 kubelet[2668]: I0702 09:26:08.188994 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjgv\" (UniqueName: \"kubernetes.io/projected/ff1fb695-ccb6-424c-b744-28f3dbf7195c-kube-api-access-dbjgv\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.190473 kubelet[2668]: I0702 09:26:08.189049 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-lib-modules\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.195078 kubelet[2668]: I0702 09:26:08.191124 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff1fb695-ccb6-424c-b744-28f3dbf7195c-clustermesh-secrets\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.195078 kubelet[2668]: I0702 09:26:08.191176 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff1fb695-ccb6-424c-b744-28f3dbf7195c-cilium-run\") pod \"cilium-kf95f\" (UID: \"ff1fb695-ccb6-424c-b744-28f3dbf7195c\") " pod="kube-system/cilium-kf95f" Jul 2 09:26:08.195832 systemd-logind[1540]: Removed session 24. 
Jul 2 09:26:08.219407 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 43350 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:08.219904 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:08.226773 systemd-logind[1540]: New session 25 of user core. Jul 2 09:26:08.235372 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:26:08.284603 sshd[4482]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:08.294737 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:43360.service - OpenSSH per-connection server daemon (10.0.0.1:43360). Jul 2 09:26:08.295177 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:43350.service: Deactivated successfully. Jul 2 09:26:08.305359 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 09:26:08.311178 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Jul 2 09:26:08.314524 systemd-logind[1540]: Removed session 25. Jul 2 09:26:08.332664 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 43360 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:08.333831 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:08.337201 systemd-logind[1540]: New session 26 of user core. Jul 2 09:26:08.347254 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 09:26:08.483997 kubelet[2668]: E0702 09:26:08.483879 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:08.484417 containerd[1558]: time="2024-07-02T09:26:08.484377363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf95f,Uid:ff1fb695-ccb6-424c-b744-28f3dbf7195c,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:08.502535 containerd[1558]: time="2024-07-02T09:26:08.502234256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:08.502535 containerd[1558]: time="2024-07-02T09:26:08.502304616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:08.502535 containerd[1558]: time="2024-07-02T09:26:08.502324336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:08.502535 containerd[1558]: time="2024-07-02T09:26:08.502338096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:08.531858 containerd[1558]: time="2024-07-02T09:26:08.531750009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf95f,Uid:ff1fb695-ccb6-424c-b744-28f3dbf7195c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\"" Jul 2 09:26:08.532373 kubelet[2668]: E0702 09:26:08.532350 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:08.534570 containerd[1558]: time="2024-07-02T09:26:08.534539064Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 09:26:08.548574 containerd[1558]: time="2024-07-02T09:26:08.548525777Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0100c15092bd13d3c51d23e7e7268e33b8a090d30c6ae15ef1647784cae81c72\"" Jul 2 09:26:08.550638 containerd[1558]: time="2024-07-02T09:26:08.549839383Z" level=info msg="StartContainer for \"0100c15092bd13d3c51d23e7e7268e33b8a090d30c6ae15ef1647784cae81c72\"" Jul 2 09:26:08.598398 containerd[1558]: time="2024-07-02T09:26:08.598293716Z" level=info msg="StartContainer for \"0100c15092bd13d3c51d23e7e7268e33b8a090d30c6ae15ef1647784cae81c72\" returns successfully" Jul 2 09:26:08.645255 containerd[1558]: time="2024-07-02T09:26:08.645107239Z" level=info msg="shim disconnected" id=0100c15092bd13d3c51d23e7e7268e33b8a090d30c6ae15ef1647784cae81c72 namespace=k8s.io Jul 2 09:26:08.645255 containerd[1558]: time="2024-07-02T09:26:08.645176720Z" level=warning msg="cleaning up after shim disconnected" id=0100c15092bd13d3c51d23e7e7268e33b8a090d30c6ae15ef1647784cae81c72 
namespace=k8s.io Jul 2 09:26:08.645255 containerd[1558]: time="2024-07-02T09:26:08.645185920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:08.928160 kubelet[2668]: E0702 09:26:08.928136 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:08.930684 containerd[1558]: time="2024-07-02T09:26:08.930637005Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 09:26:08.940887 containerd[1558]: time="2024-07-02T09:26:08.940845258Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb7d1ff790ed91b92ac25386c670cd3662857ca0d95b99ab35a91936d2d71030\"" Jul 2 09:26:08.943064 containerd[1558]: time="2024-07-02T09:26:08.941778423Z" level=info msg="StartContainer for \"cb7d1ff790ed91b92ac25386c670cd3662857ca0d95b99ab35a91936d2d71030\"" Jul 2 09:26:08.985130 containerd[1558]: time="2024-07-02T09:26:08.985092648Z" level=info msg="StartContainer for \"cb7d1ff790ed91b92ac25386c670cd3662857ca0d95b99ab35a91936d2d71030\" returns successfully" Jul 2 09:26:09.007805 containerd[1558]: time="2024-07-02T09:26:09.007589124Z" level=info msg="shim disconnected" id=cb7d1ff790ed91b92ac25386c670cd3662857ca0d95b99ab35a91936d2d71030 namespace=k8s.io Jul 2 09:26:09.007805 containerd[1558]: time="2024-07-02T09:26:09.007719285Z" level=warning msg="cleaning up after shim disconnected" id=cb7d1ff790ed91b92ac25386c670cd3662857ca0d95b99ab35a91936d2d71030 namespace=k8s.io Jul 2 09:26:09.007805 containerd[1558]: time="2024-07-02T09:26:09.007729645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:09.585995 kubelet[2668]: I0702 
09:26:09.585967 2668 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T09:26:09Z","lastTransitionTime":"2024-07-02T09:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 09:26:09.716751 kubelet[2668]: E0702 09:26:09.716474 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:09.932123 kubelet[2668]: E0702 09:26:09.932075 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:09.934949 containerd[1558]: time="2024-07-02T09:26:09.934601163Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 09:26:09.992797 containerd[1558]: time="2024-07-02T09:26:09.992670653Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e\"" Jul 2 09:26:09.993281 containerd[1558]: time="2024-07-02T09:26:09.993223696Z" level=info msg="StartContainer for \"ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e\"" Jul 2 09:26:10.035511 containerd[1558]: time="2024-07-02T09:26:10.035406980Z" level=info msg="StartContainer for \"ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e\" returns successfully" Jul 2 09:26:10.051515 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e-rootfs.mount: Deactivated successfully. Jul 2 09:26:10.056688 containerd[1558]: time="2024-07-02T09:26:10.056627082Z" level=info msg="shim disconnected" id=ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e namespace=k8s.io Jul 2 09:26:10.056688 containerd[1558]: time="2024-07-02T09:26:10.056688602Z" level=warning msg="cleaning up after shim disconnected" id=ff5de3413e2dd53cdf46f6dbb5a60e77cb1811c29a2ecbf170d97095c56e341e namespace=k8s.io Jul 2 09:26:10.056844 containerd[1558]: time="2024-07-02T09:26:10.056698563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:10.935547 kubelet[2668]: E0702 09:26:10.935500 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:10.938397 containerd[1558]: time="2024-07-02T09:26:10.938237603Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 09:26:10.998321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681616276.mount: Deactivated successfully. 
Jul 2 09:26:10.999409 containerd[1558]: time="2024-07-02T09:26:10.999362217Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13\"" Jul 2 09:26:10.999862 containerd[1558]: time="2024-07-02T09:26:10.999834939Z" level=info msg="StartContainer for \"adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13\"" Jul 2 09:26:11.020352 systemd[1]: run-containerd-runc-k8s.io-adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13-runc.VhYKqJ.mount: Deactivated successfully. Jul 2 09:26:11.041502 containerd[1558]: time="2024-07-02T09:26:11.041448411Z" level=info msg="StartContainer for \"adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13\" returns successfully" Jul 2 09:26:11.058280 containerd[1558]: time="2024-07-02T09:26:11.058216889Z" level=info msg="shim disconnected" id=adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13 namespace=k8s.io Jul 2 09:26:11.058280 containerd[1558]: time="2024-07-02T09:26:11.058268529Z" level=warning msg="cleaning up after shim disconnected" id=adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13 namespace=k8s.io Jul 2 09:26:11.058280 containerd[1558]: time="2024-07-02T09:26:11.058276489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:11.940085 kubelet[2668]: E0702 09:26:11.939912 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:11.947847 containerd[1558]: time="2024-07-02T09:26:11.947679400Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 09:26:11.962585 containerd[1558]: 
time="2024-07-02T09:26:11.962533069Z" level=info msg="CreateContainer within sandbox \"2fa47ad808f803cb7961eed45eacad48fea5db5e8e561f27944ad9b91f686b19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f0e03be6713c7bd460ea7d61bed7d2e1e4d2985d17984fd10882e4e05e5c7bb7\"" Jul 2 09:26:11.963019 containerd[1558]: time="2024-07-02T09:26:11.962991831Z" level=info msg="StartContainer for \"f0e03be6713c7bd460ea7d61bed7d2e1e4d2985d17984fd10882e4e05e5c7bb7\"" Jul 2 09:26:11.993848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adde9260d2219801568f42cfc194a9b234aa5b77984f28daf7c84bd91dec1c13-rootfs.mount: Deactivated successfully. Jul 2 09:26:12.007089 containerd[1558]: time="2024-07-02T09:26:12.007048873Z" level=info msg="StartContainer for \"f0e03be6713c7bd460ea7d61bed7d2e1e4d2985d17984fd10882e4e05e5c7bb7\" returns successfully" Jul 2 09:26:12.267061 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 2 09:26:12.945059 kubelet[2668]: E0702 09:26:12.944928 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:12.957194 kubelet[2668]: I0702 09:26:12.957125 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kf95f" podStartSLOduration=4.957088692 podCreationTimestamp="2024-07-02 09:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:12.95672857 +0000 UTC m=+85.340348914" watchObservedRunningTime="2024-07-02 09:26:12.957088692 +0000 UTC m=+85.340709036" Jul 2 09:26:14.487433 kubelet[2668]: E0702 09:26:14.487405 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:14.717109 kubelet[2668]: 
E0702 09:26:14.717064 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:15.082133 systemd-networkd[1242]: lxc_health: Link UP Jul 2 09:26:15.097289 systemd-networkd[1242]: lxc_health: Gained carrier Jul 2 09:26:16.487190 kubelet[2668]: E0702 09:26:16.486463 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:16.778892 systemd[1]: run-containerd-runc-k8s.io-f0e03be6713c7bd460ea7d61bed7d2e1e4d2985d17984fd10882e4e05e5c7bb7-runc.uTutSB.mount: Deactivated successfully. Jul 2 09:26:16.951385 kubelet[2668]: E0702 09:26:16.951343 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:17.004195 systemd-networkd[1242]: lxc_health: Gained IPv6LL Jul 2 09:26:17.953201 kubelet[2668]: E0702 09:26:17.953067 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:21.053971 sshd[4491]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:21.057430 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:43360.service: Deactivated successfully. Jul 2 09:26:21.059507 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. Jul 2 09:26:21.059623 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 09:26:21.060533 systemd-logind[1540]: Removed session 26.