Jul 2 09:25:33.908672 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:25:33.908692 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:25:33.908702 kernel: KASLR enabled
Jul 2 09:25:33.908708 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:25:33.908713 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 09:25:33.908719 kernel: random: crng init done
Jul 2 09:25:33.908726 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:25:33.908732 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 09:25:33.908738 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 09:25:33.908746 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908752 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908758 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908764 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908770 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908777 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908785 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908791 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908798 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:25:33.908804 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 09:25:33.908810 kernel: NUMA: Failed to initialise from firmware
Jul 2 09:25:33.908816 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:25:33.908823 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 09:25:33.908829 kernel: Zone ranges:
Jul 2 09:25:33.908835 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:25:33.908841 kernel: DMA32 empty
Jul 2 09:25:33.908849 kernel: Normal empty
Jul 2 09:25:33.908855 kernel: Movable zone start for each node
Jul 2 09:25:33.908861 kernel: Early memory node ranges
Jul 2 09:25:33.908868 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 09:25:33.908874 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 09:25:33.908880 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 09:25:33.908886 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 09:25:33.908893 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 09:25:33.908899 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 09:25:33.908905 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 09:25:33.908912 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:25:33.908918 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 09:25:33.908925 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:25:33.908932 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:25:33.908938 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:25:33.908947 kernel: psci: Trusted OS migration not required
Jul 2 09:25:33.908954 kernel: psci: SMC Calling Convention v1.1
Jul 2 09:25:33.908961 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 09:25:33.908969 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:25:33.908975 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:25:33.908982 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 09:25:33.908989 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:25:33.908996 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:25:33.909002 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:25:33.909009 kernel: CPU features: detected: Spectre-v4
Jul 2 09:25:33.909016 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:25:33.909022 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:25:33.909029 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:25:33.909037 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:25:33.909044 kernel: alternatives: applying boot alternatives
Jul 2 09:25:33.909051 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:25:33.909059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:25:33.909065 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:25:33.909072 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:25:33.909079 kernel: Fallback order for Node 0: 0
Jul 2 09:25:33.909085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 09:25:33.909092 kernel: Policy zone: DMA
Jul 2 09:25:33.909099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:25:33.909105 kernel: software IO TLB: area num 4.
Jul 2 09:25:33.909113 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 09:25:33.909120 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 09:25:33.909127 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 09:25:33.909134 kernel: trace event string verifier disabled
Jul 2 09:25:33.909140 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:25:33.909148 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:25:33.909154 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 09:25:33.909161 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:25:33.909168 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:25:33.909175 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:25:33.909182 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 09:25:33.909189 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:25:33.909196 kernel: GICv3: 256 SPIs implemented
Jul 2 09:25:33.909203 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:25:33.909210 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:25:33.909216 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:25:33.909223 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 09:25:33.909230 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 09:25:33.909236 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 09:25:33.909243 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 09:25:33.909250 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 09:25:33.909257 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 09:25:33.909263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:25:33.909271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:25:33.909278 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:25:33.909285 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:25:33.909292 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:25:33.909299 kernel: arm-pv: using stolen time PV
Jul 2 09:25:33.909306 kernel: Console: colour dummy device 80x25
Jul 2 09:25:33.909313 kernel: ACPI: Core revision 20230628
Jul 2 09:25:33.909320 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:25:33.909327 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:25:33.909334 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:25:33.909342 kernel: SELinux: Initializing.
Jul 2 09:25:33.909349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:25:33.909356 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:25:33.909362 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:25:33.909369 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:25:33.909376 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:25:33.909383 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:25:33.909418 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 09:25:33.909426 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 09:25:33.909442 kernel: Remapping and enabling EFI services.
Jul 2 09:25:33.909449 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:25:33.909456 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:25:33.909463 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 09:25:33.909470 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 09:25:33.909476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:25:33.909483 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:25:33.909490 kernel: Detected PIPT I-cache on CPU2
Jul 2 09:25:33.909497 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 09:25:33.909504 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 09:25:33.909513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:25:33.909520 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 09:25:33.909532 kernel: Detected PIPT I-cache on CPU3
Jul 2 09:25:33.909540 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 09:25:33.909548 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 09:25:33.909555 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:25:33.909562 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 09:25:33.909569 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 09:25:33.909576 kernel: SMP: Total of 4 processors activated.
Jul 2 09:25:33.909585 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:25:33.909592 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:25:33.909600 kernel: CPU features: detected: Common not Private translations
Jul 2 09:25:33.909607 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:25:33.909614 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 09:25:33.909621 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:25:33.909628 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:25:33.909635 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:25:33.909644 kernel: CPU features: detected: RAS Extension Support
Jul 2 09:25:33.909651 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 09:25:33.909658 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:25:33.909665 kernel: alternatives: applying system-wide alternatives
Jul 2 09:25:33.909673 kernel: devtmpfs: initialized
Jul 2 09:25:33.909680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:25:33.909687 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 09:25:33.909694 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:25:33.909702 kernel: SMBIOS 3.0.0 present.
Jul 2 09:25:33.909710 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 09:25:33.909718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:25:33.909725 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:25:33.909732 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:25:33.909740 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:25:33.909747 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:25:33.909754 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 2 09:25:33.909761 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:25:33.909768 kernel: cpuidle: using governor menu
Jul 2 09:25:33.909777 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:25:33.909784 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:25:33.909791 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:25:33.909799 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:25:33.909806 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:25:33.909813 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:25:33.909820 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:25:33.909827 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:25:33.909835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:25:33.909843 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:25:33.909851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:25:33.909858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:25:33.909865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:25:33.909872 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:25:33.909879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:25:33.909886 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:25:33.909893 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:25:33.909900 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:25:33.909909 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:25:33.909916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:25:33.909923 kernel: ACPI: Interpreter enabled
Jul 2 09:25:33.909930 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:25:33.909937 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 09:25:33.909945 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:25:33.909952 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:25:33.909959 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 09:25:33.910082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:25:33.910157 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 09:25:33.910223 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 09:25:33.910286 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 09:25:33.910349 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 09:25:33.910358 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 09:25:33.910366 kernel: PCI host bridge to bus 0000:00
Jul 2 09:25:33.910506 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 09:25:33.910573 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 09:25:33.910632 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 09:25:33.910691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 09:25:33.910773 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 09:25:33.910848 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 09:25:33.910915 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 09:25:33.910984 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 09:25:33.911049 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:25:33.911113 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:25:33.911178 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 09:25:33.911242 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 09:25:33.911300 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 09:25:33.911358 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 09:25:33.911447 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 09:25:33.911458 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 09:25:33.911466 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 09:25:33.911473 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 09:25:33.911480 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 09:25:33.911488 kernel: iommu: Default domain type: Translated
Jul 2 09:25:33.911495 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:25:33.911502 kernel: efivars: Registered efivars operations
Jul 2 09:25:33.911509 kernel: vgaarb: loaded
Jul 2 09:25:33.911519 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:25:33.911527 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:25:33.911534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:25:33.911541 kernel: pnp: PnP ACPI init
Jul 2 09:25:33.911618 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 09:25:33.911628 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 09:25:33.911636 kernel: NET: Registered PF_INET protocol family
Jul 2 09:25:33.911643 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:25:33.911653 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:25:33.911661 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:25:33.911668 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:25:33.911675 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:25:33.911682 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:25:33.911690 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:25:33.911697 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:25:33.911704 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:25:33.911711 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:25:33.911720 kernel: kvm [1]: HYP mode not available
Jul 2 09:25:33.911728 kernel: Initialise system trusted keyrings
Jul 2 09:25:33.911735 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:25:33.911742 kernel: Key type asymmetric registered
Jul 2 09:25:33.911749 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:25:33.911757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:25:33.911764 kernel: io scheduler mq-deadline registered
Jul 2 09:25:33.911771 kernel: io scheduler kyber registered
Jul 2 09:25:33.911778 kernel: io scheduler bfq registered
Jul 2 09:25:33.911787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 09:25:33.911794 kernel: ACPI: button: Power Button [PWRB]
Jul 2 09:25:33.911802 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 09:25:33.911867 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 09:25:33.911877 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:25:33.911884 kernel: thunder_xcv, ver 1.0
Jul 2 09:25:33.911892 kernel: thunder_bgx, ver 1.0
Jul 2 09:25:33.911899 kernel: nicpf, ver 1.0
Jul 2 09:25:33.911906 kernel: nicvf, ver 1.0
Jul 2 09:25:33.911979 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:25:33.912044 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:25:33 UTC (1719912333)
Jul 2 09:25:33.912055 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:25:33.912062 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 09:25:33.912070 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:25:33.912077 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:25:33.912085 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:25:33.912092 kernel: Segment Routing with IPv6
Jul 2 09:25:33.912101 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:25:33.912109 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:25:33.912116 kernel: Key type dns_resolver registered
Jul 2 09:25:33.912123 kernel: registered taskstats version 1
Jul 2 09:25:33.912131 kernel: Loading compiled-in X.509 certificates
Jul 2 09:25:33.912143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:25:33.912151 kernel: Key type .fscrypt registered
Jul 2 09:25:33.912158 kernel: Key type fscrypt-provisioning registered
Jul 2 09:25:33.912166 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:25:33.912175 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:25:33.912182 kernel: ima: No architecture policies found
Jul 2 09:25:33.912190 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:25:33.912197 kernel: clk: Disabling unused clocks
Jul 2 09:25:33.912207 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:25:33.912214 kernel: Run /init as init process
Jul 2 09:25:33.912221 kernel: with arguments:
Jul 2 09:25:33.912230 kernel: /init
Jul 2 09:25:33.912240 kernel: with environment:
Jul 2 09:25:33.912250 kernel: HOME=/
Jul 2 09:25:33.912258 kernel: TERM=linux
Jul 2 09:25:33.912265 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:25:33.912274 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:25:33.912284 systemd[1]: Detected virtualization kvm.
Jul 2 09:25:33.912292 systemd[1]: Detected architecture arm64.
Jul 2 09:25:33.912301 systemd[1]: Running in initrd.
Jul 2 09:25:33.912309 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:25:33.912318 systemd[1]: Hostname set to .
Jul 2 09:25:33.912327 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:25:33.912335 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:25:33.912346 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:25:33.912355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:25:33.912367 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:25:33.912375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:25:33.912383 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:25:33.912402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:25:33.912411 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:25:33.912419 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:25:33.912427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:25:33.912441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:25:33.912449 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:25:33.912459 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:25:33.912467 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:25:33.912475 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:25:33.912483 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:25:33.912490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:25:33.912498 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:25:33.912506 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:25:33.912514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:25:33.912522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:25:33.912531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:25:33.912539 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:25:33.912547 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:25:33.912554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:25:33.912562 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:25:33.912570 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:25:33.912578 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:25:33.912585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:25:33.912593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:25:33.912602 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:25:33.912610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:25:33.912618 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:25:33.912626 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:25:33.912636 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:25:33.912644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:25:33.912668 systemd-journald[237]: Collecting audit messages is disabled.
Jul 2 09:25:33.912687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:25:33.912697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:25:33.912705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:25:33.912714 systemd-journald[237]: Journal started
Jul 2 09:25:33.912732 systemd-journald[237]: Runtime Journal (/run/log/journal/3ec8d75767394bce824443a51e8acc1f) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:25:33.901454 systemd-modules-load[238]: Inserted module 'overlay'
Jul 2 09:25:33.917870 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 2 09:25:33.919427 kernel: Bridge firewalling registered
Jul 2 09:25:33.919454 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:25:33.920647 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:25:33.923479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:25:33.926303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:25:33.927893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:25:33.929575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:25:33.932830 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:25:33.937965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:25:33.939711 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:25:33.945768 dracut-cmdline[269]: dracut-dracut-053
Jul 2 09:25:33.948174 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:25:33.947555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:25:33.973561 systemd-resolved[281]: Positive Trust Anchors:
Jul 2 09:25:33.973577 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:25:33.973607 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:25:33.978111 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jul 2 09:25:33.979066 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:25:33.982364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:25:34.011410 kernel: SCSI subsystem initialized
Jul 2 09:25:34.017403 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:25:34.026412 kernel: iscsi: registered transport (tcp)
Jul 2 09:25:34.038423 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:25:34.038461 kernel: QLogic iSCSI HBA Driver
Jul 2 09:25:34.079135 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:25:34.090596 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:25:34.109123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:25:34.109175 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:25:34.110144 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:25:34.157413 kernel: raid6: neonx8 gen() 15689 MB/s
Jul 2 09:25:34.174417 kernel: raid6: neonx4 gen() 15455 MB/s
Jul 2 09:25:34.191414 kernel: raid6: neonx2 gen() 13217 MB/s
Jul 2 09:25:34.208412 kernel: raid6: neonx1 gen() 10437 MB/s
Jul 2 09:25:34.225406 kernel: raid6: int64x8 gen() 6928 MB/s
Jul 2 09:25:34.242406 kernel: raid6: int64x4 gen() 7290 MB/s
Jul 2 09:25:34.259417 kernel: raid6: int64x2 gen() 6096 MB/s
Jul 2 09:25:34.276493 kernel: raid6: int64x1 gen() 5040 MB/s
Jul 2 09:25:34.276524 kernel: raid6: using algorithm neonx8 gen() 15689 MB/s
Jul 2 09:25:34.297652 kernel: raid6: .... xor() 14296 MB/s, rmw enabled
Jul 2 09:25:34.297670 kernel: raid6: using neon recovery algorithm
Jul 2 09:25:34.303408 kernel: xor: measuring software checksum speed
Jul 2 09:25:34.303428 kernel: 8regs : 19844 MB/sec
Jul 2 09:25:34.304403 kernel: 32regs : 19697 MB/sec
Jul 2 09:25:34.305848 kernel: arm64_neon : 27215 MB/sec
Jul 2 09:25:34.305861 kernel: xor: using function: arm64_neon (27215 MB/sec)
Jul 2 09:25:34.358406 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:25:34.368764 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:25:34.376547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:25:34.387569 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 2 09:25:34.390700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:25:34.393159 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:25:34.407171 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 2 09:25:34.432098 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:25:34.443515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:25:34.483455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:25:34.496594 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:25:34.509443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:25:34.511168 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:25:34.512841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:25:34.515202 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:25:34.522552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:25:34.535712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:25:34.539065 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 09:25:34.546585 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 09:25:34.546715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 09:25:34.546727 kernel: GPT:9289727 != 19775487
Jul 2 09:25:34.546737 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 09:25:34.546746 kernel: GPT:9289727 != 19775487
Jul 2 09:25:34.546754 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 09:25:34.546766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:25:34.547689 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:25:34.547811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:25:34.552126 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:25:34.553181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:25:34.553329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:25:34.555523 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:25:34.566419 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (510)
Jul 2 09:25:34.569775 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
Jul 2 09:25:34.570687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:25:34.582194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 09:25:34.586591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 09:25:34.588000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:25:34.598934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:25:34.603005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 09:25:34.604215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 09:25:34.620562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 09:25:34.622331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:25:34.628011 disk-uuid[549]: Primary Header is updated.
Jul 2 09:25:34.628011 disk-uuid[549]: Secondary Entries is updated.
Jul 2 09:25:34.628011 disk-uuid[549]: Secondary Header is updated.
Jul 2 09:25:34.631417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:25:34.651897 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:25:35.653640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:25:35.653690 disk-uuid[550]: The operation has completed successfully.
Jul 2 09:25:35.683850 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 09:25:35.684944 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 09:25:35.708915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 09:25:35.711743 sh[568]: Success
Jul 2 09:25:35.724416 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 09:25:35.763823 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 09:25:35.766944 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 09:25:35.769441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 09:25:35.778652 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 09:25:35.778696 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:25:35.778707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 09:25:35.780416 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 09:25:35.780450 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 09:25:35.784705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 09:25:35.785712 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 09:25:35.794544 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 09:25:35.796772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 09:25:35.809105 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:25:35.809144 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:25:35.809154 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:25:35.814409 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:25:35.822836 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:25:35.824497 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:25:35.832114 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:25:35.840589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:25:35.907219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:25:35.920601 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:25:35.947036 systemd-networkd[759]: lo: Link UP
Jul 2 09:25:35.947050 systemd-networkd[759]: lo: Gained carrier
Jul 2 09:25:35.948737 ignition[672]: Ignition 2.18.0
Jul 2 09:25:35.947733 systemd-networkd[759]: Enumeration completed
Jul 2 09:25:35.948743 ignition[672]: Stage: fetch-offline
Jul 2 09:25:35.947815 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:25:35.948777 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:35.948946 systemd[1]: Reached target network.target - Network.
Jul 2 09:25:35.948785 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:35.950705 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:25:35.948870 ignition[672]: parsed url from cmdline: ""
Jul 2 09:25:35.950708 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:25:35.948873 ignition[672]: no config URL provided
Jul 2 09:25:35.951483 systemd-networkd[759]: eth0: Link UP
Jul 2 09:25:35.948878 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:25:35.951488 systemd-networkd[759]: eth0: Gained carrier
Jul 2 09:25:35.948885 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:25:35.951494 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:25:35.948909 ignition[672]: op(1): [started] loading QEMU firmware config module
Jul 2 09:25:35.948914 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 09:25:35.960156 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jul 2 09:25:35.960178 ignition[672]: QEMU firmware config was not found. Ignoring...
Jul 2 09:25:35.973426 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:25:36.004076 ignition[672]: parsing config with SHA512: 7297650d8442938c4b5acb09c5b37f8433aa8d6d8a467a503302b3b66b81b55967d6a9079cd8f08922e33b299ecf705932e4ebf20fd84267957569a5c7933d21
Jul 2 09:25:36.008606 unknown[672]: fetched base config from "system"
Jul 2 09:25:36.008616 unknown[672]: fetched user config from "qemu"
Jul 2 09:25:36.009052 ignition[672]: fetch-offline: fetch-offline passed
Jul 2 09:25:36.010478 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:25:36.009109 ignition[672]: Ignition finished successfully
Jul 2 09:25:36.012232 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 09:25:36.021598 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:25:36.032799 ignition[765]: Ignition 2.18.0
Jul 2 09:25:36.032810 ignition[765]: Stage: kargs
Jul 2 09:25:36.032967 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:36.032976 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:36.033856 ignition[765]: kargs: kargs passed
Jul 2 09:25:36.036518 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:25:36.033901 ignition[765]: Ignition finished successfully
Jul 2 09:25:36.048615 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:25:36.059187 ignition[775]: Ignition 2.18.0
Jul 2 09:25:36.059200 ignition[775]: Stage: disks
Jul 2 09:25:36.060103 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:36.060113 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:36.062247 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:25:36.060990 ignition[775]: disks: disks passed
Jul 2 09:25:36.063531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:25:36.061033 ignition[775]: Ignition finished successfully
Jul 2 09:25:36.065290 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:25:36.066410 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:25:36.068042 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:25:36.069379 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:25:36.080537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:25:36.094140 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 09:25:36.098604 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:25:36.112533 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:25:36.163414 kernel: EXT4-fs (vda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:25:36.163383 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:25:36.164687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:25:36.174689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:25:36.177255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:25:36.178275 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 09:25:36.178315 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:25:36.178339 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:25:36.184790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:25:36.187858 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:25:36.195470 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (794)
Jul 2 09:25:36.198371 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:25:36.198426 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:25:36.198444 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:25:36.203415 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:25:36.205249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:25:36.249714 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:25:36.253354 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:25:36.257854 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:25:36.262784 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:25:36.340703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:25:36.348555 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:25:36.351764 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:25:36.356435 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:25:36.376087 ignition[908]: INFO : Ignition 2.18.0
Jul 2 09:25:36.376087 ignition[908]: INFO : Stage: mount
Jul 2 09:25:36.376087 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:36.376087 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:36.376087 ignition[908]: INFO : mount: mount passed
Jul 2 09:25:36.376087 ignition[908]: INFO : Ignition finished successfully
Jul 2 09:25:36.379670 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:25:36.394508 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:25:36.395576 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:25:36.777771 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:25:36.788711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:25:36.801425 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Jul 2 09:25:36.803646 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:25:36.803683 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:25:36.803694 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:25:36.810452 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:25:36.811803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:25:36.838138 ignition[940]: INFO : Ignition 2.18.0
Jul 2 09:25:36.840149 ignition[940]: INFO : Stage: files
Jul 2 09:25:36.840149 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:36.840149 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:36.840149 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:25:36.844026 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:25:36.844026 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:25:36.847352 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:25:36.848679 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:25:36.848679 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:25:36.848031 unknown[940]: wrote ssh authorized keys file for user: core
Jul 2 09:25:36.852193 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:25:36.852193 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:25:36.889024 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 09:25:36.927768 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:25:36.927768 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:25:36.932070 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 09:25:37.266418 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 09:25:37.333370 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 09:25:37.333370 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 09:25:37.336752 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jul 2 09:25:37.570536 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 09:25:37.797973 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 09:25:37.797973 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 09:25:37.802865 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:25:37.826889 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:25:37.828524 systemd-networkd[759]: eth0: Gained IPv6LL
Jul 2 09:25:37.831026 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:25:37.832724 ignition[940]: INFO : files: files passed
Jul 2 09:25:37.832724 ignition[940]: INFO : Ignition finished successfully
Jul 2 09:25:37.833773 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:25:37.841803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:25:37.844045 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:25:37.845592 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:25:37.845670 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:25:37.852464 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 09:25:37.857420 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:25:37.857420 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:25:37.860467 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:25:37.862642 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:25:37.864178 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:25:37.872581 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:25:37.892560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:25:37.892667 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:25:37.894729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:25:37.896460 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:25:37.898201 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:25:37.898939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:25:37.914759 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:25:37.922594 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:25:37.931782 systemd[1]: Stopped target network.target - Network.
Jul 2 09:25:37.932720 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:25:37.934424 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:25:37.936400 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:25:37.938089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:25:37.938202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:25:37.940583 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 09:25:37.942478 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 09:25:37.944034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 09:25:37.945637 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:25:37.947490 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 09:25:37.949374 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 09:25:37.951145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:25:37.952994 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 09:25:37.954861 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 09:25:37.956528 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 09:25:37.958053 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 09:25:37.958162 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:25:37.960332 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:25:37.962218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:25:37.964084 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 09:25:37.964214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:25:37.966086 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 09:25:37.966195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:25:37.968843 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 09:25:37.968959 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:25:37.970775 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 09:25:37.972296 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 09:25:37.975529 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:25:37.977083 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 09:25:37.979035 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 09:25:37.980511 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 09:25:37.980596 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:25:37.982053 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 09:25:37.982134 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:25:37.983575 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 09:25:37.983681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:25:37.985320 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 09:25:37.985443 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 09:25:37.998737 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 09:25:38.000251 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 09:25:38.001340 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 09:25:38.003171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 09:25:38.004835 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 09:25:38.004968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:25:38.006496 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jul 2 09:25:38.007044 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 09:25:38.007143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:25:38.013052 ignition[994]: INFO : Ignition 2.18.0
Jul 2 09:25:38.013052 ignition[994]: INFO : Stage: umount
Jul 2 09:25:38.013052 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:25:38.013052 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:25:38.019682 ignition[994]: INFO : umount: umount passed
Jul 2 09:25:38.019682 ignition[994]: INFO : Ignition finished successfully
Jul 2 09:25:38.013749 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 09:25:38.013857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 09:25:38.016789 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 09:25:38.016906 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 09:25:38.021268 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 09:25:38.021880 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 09:25:38.021960 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 09:25:38.026480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 09:25:38.026645 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:25:38.028199 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 09:25:38.028301 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 09:25:38.030055 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 09:25:38.030103 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 09:25:38.031808 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 09:25:38.031853 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 09:25:38.033510 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 09:25:38.033558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 09:25:38.049544 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 09:25:38.050438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 09:25:38.050520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:25:38.052563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 09:25:38.052620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:25:38.054325 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 09:25:38.054382 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:25:38.056418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 09:25:38.056485 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:25:38.058566 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:25:38.060709 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 09:25:38.061632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 09:25:38.072638 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 09:25:38.072771 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 09:25:38.079296 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 09:25:38.079497 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:25:38.081724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 09:25:38.081767 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:25:38.083617 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 09:25:38.083652 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:25:38.085481 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 09:25:38.085541 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:25:38.087696 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 09:25:38.087760 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:25:38.090267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:25:38.090320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:25:38.100604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 09:25:38.101619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 09:25:38.101702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:25:38.104626 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 09:25:38.104688 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:25:38.106620 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 09:25:38.106685 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:25:38.108696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:25:38.108757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:25:38.110941 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 09:25:38.111043 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 09:25:38.112752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 09:25:38.114454 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 09:25:38.116189 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 09:25:38.117462 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 09:25:38.117547 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 09:25:38.133593 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 09:25:38.140787 systemd[1]: Switching root.
Jul 2 09:25:38.165288 systemd-journald[237]: Journal stopped
Jul 2 09:25:38.901308 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 2 09:25:38.901366 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 09:25:38.901383 kernel: SELinux: policy capability open_perms=1
Jul 2 09:25:38.901420 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 09:25:38.901439 kernel: SELinux: policy capability always_check_network=0
Jul 2 09:25:38.901454 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 09:25:38.901463 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 09:25:38.901472 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 09:25:38.901482 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 09:25:38.901491 kernel: audit: type=1403 audit(1719912338.331:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 09:25:38.901506 systemd[1]: Successfully loaded SELinux policy in 32.463ms.
Jul 2 09:25:38.901523 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.255ms.
Jul 2 09:25:38.901535 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:25:38.901546 systemd[1]: Detected virtualization kvm.
Jul 2 09:25:38.901562 systemd[1]: Detected architecture arm64.
Jul 2 09:25:38.901573 systemd[1]: Detected first boot.
Jul 2 09:25:38.901583 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:25:38.901594 zram_generator::config[1038]: No configuration found.
Jul 2 09:25:38.901605 systemd[1]: Populated /etc with preset unit settings.
Jul 2 09:25:38.901615 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 09:25:38.901625 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 09:25:38.901636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 09:25:38.901649 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 09:25:38.901660 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 09:25:38.901671 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 09:25:38.901681 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 09:25:38.901691 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 09:25:38.901702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 09:25:38.901712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 09:25:38.901723 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 09:25:38.901735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:25:38.901746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:25:38.901757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 09:25:38.901767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 09:25:38.901778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 09:25:38.901790 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:25:38.901801 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 09:25:38.901811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:25:38.901821 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 09:25:38.901833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 09:25:38.901845 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:25:38.901855 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 09:25:38.901865 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:25:38.901876 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:25:38.901887 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:25:38.901897 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:25:38.901907 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 09:25:38.901919 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 09:25:38.901930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:25:38.901940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:25:38.901951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:25:38.901961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 09:25:38.901972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 09:25:38.901983 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 09:25:38.901994 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 09:25:38.902004 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 09:25:38.902018 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 09:25:38.902029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 09:25:38.902040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 09:25:38.902050 systemd[1]: Reached target machines.target - Containers.
Jul 2 09:25:38.902060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 09:25:38.902071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:25:38.902081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:25:38.902092 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 09:25:38.902104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:25:38.902114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:25:38.902124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:25:38.902135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 09:25:38.902145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:25:38.902156 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 09:25:38.902167 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 09:25:38.902177 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 09:25:38.902187 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 09:25:38.902199 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 09:25:38.902209 kernel: fuse: init (API version 7.39)
Jul 2 09:25:38.902219 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:25:38.902229 kernel: loop: module loaded
Jul 2 09:25:38.902239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:25:38.902250 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 09:25:38.902260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 09:25:38.902270 kernel: ACPI: bus type drm_connector registered
Jul 2 09:25:38.902296 systemd-journald[1101]: Collecting audit messages is disabled.
Jul 2 09:25:38.902320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:25:38.902332 systemd-journald[1101]: Journal started
Jul 2 09:25:38.904148 systemd-journald[1101]: Runtime Journal (/run/log/journal/3ec8d75767394bce824443a51e8acc1f) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:25:38.904194 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 09:25:38.904220 systemd[1]: Stopped verity-setup.service.
Jul 2 09:25:38.700668 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 09:25:38.726228 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 09:25:38.726638 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 09:25:38.909451 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:25:38.908946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 09:25:38.910103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 09:25:38.911273 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 09:25:38.912366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 09:25:38.913594 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 09:25:38.914879 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 09:25:38.916202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:25:38.917675 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 09:25:38.917820 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 09:25:38.919247 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 09:25:38.920714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:25:38.920862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:25:38.922235 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:25:38.922373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:25:38.923728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:25:38.923855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:25:38.925319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 09:25:38.925508 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 09:25:38.926728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:25:38.926865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:25:38.928214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:25:38.929582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 09:25:38.931047 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 09:25:38.942981 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 09:25:38.948498 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 09:25:38.951577 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 09:25:38.952682 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 09:25:38.952723 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:25:38.957484 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 09:25:38.959722 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 09:25:38.961825 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 09:25:38.962901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:25:38.964374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 09:25:38.966307 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 09:25:38.967539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:25:38.971562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 09:25:38.974475 systemd-journald[1101]: Time spent on flushing to /var/log/journal/3ec8d75767394bce824443a51e8acc1f is 23.038ms for 856 entries.
Jul 2 09:25:38.974475 systemd-journald[1101]: System Journal (/var/log/journal/3ec8d75767394bce824443a51e8acc1f) is 8.0M, max 195.6M, 187.6M free.
Jul 2 09:25:39.006441 systemd-journald[1101]: Received client request to flush runtime journal.
Jul 2 09:25:39.006483 kernel: loop0: detected capacity change from 0 to 194096
Jul 2 09:25:39.006497 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 09:25:39.006654 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 09:25:38.972652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:25:38.974125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:25:38.977920 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 09:25:38.980580 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:25:38.986001 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:25:38.987535 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 09:25:38.988947 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 09:25:38.991069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 09:25:39.002642 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 09:25:39.004130 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 09:25:39.008036 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 09:25:39.019855 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 09:25:39.022218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 09:25:39.023880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:25:39.032467 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 09:25:39.039415 kernel: loop1: detected capacity change from 0 to 113672
Jul 2 09:25:39.044711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 09:25:39.045717 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 09:25:39.056879 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jul 2 09:25:39.056896 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jul 2 09:25:39.061536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:25:39.065420 kernel: loop2: detected capacity change from 0 to 59672
Jul 2 09:25:39.074585 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 09:25:39.098625 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 09:25:39.102516 kernel: loop3: detected capacity change from 0 to 194096
Jul 2 09:25:39.105667 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:25:39.116422 kernel: loop4: detected capacity change from 0 to 113672
Jul 2 09:25:39.117778 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jul 2 09:25:39.117797 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jul 2 09:25:39.121865 kernel: loop5: detected capacity change from 0 to 59672
Jul 2 09:25:39.121669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:25:39.127024 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 09:25:39.127512 (sd-merge)[1173]: Merged extensions into '/usr'.
Jul 2 09:25:39.130775 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 09:25:39.130791 systemd[1]: Reloading...
Jul 2 09:25:39.170692 zram_generator::config[1200]: No configuration found.
Jul 2 09:25:39.254932 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 09:25:39.276311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:25:39.313931 systemd[1]: Reloading finished in 182 ms.
Jul 2 09:25:39.342698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 09:25:39.344162 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 09:25:39.359668 systemd[1]: Starting ensure-sysext.service...
Jul 2 09:25:39.361620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:25:39.368266 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)...
Jul 2 09:25:39.368281 systemd[1]: Reloading...
Jul 2 09:25:39.379242 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 09:25:39.379846 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 09:25:39.380761 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 09:25:39.381108 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jul 2 09:25:39.381935 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jul 2 09:25:39.384317 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:25:39.384470 systemd-tmpfiles[1237]: Skipping /boot
Jul 2 09:25:39.391552 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 09:25:39.391660 systemd-tmpfiles[1237]: Skipping /boot
Jul 2 09:25:39.422458 zram_generator::config[1262]: No configuration found.
Jul 2 09:25:39.496346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:25:39.533190 systemd[1]: Reloading finished in 164 ms.
Jul 2 09:25:39.548343 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 09:25:39.556863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:25:39.564742 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:25:39.567227 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 09:25:39.569717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 09:25:39.573733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:25:39.579172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:25:39.582284 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 09:25:39.585222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:25:39.586298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:25:39.590681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:25:39.595752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:25:39.597095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:25:39.598163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:25:39.598336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:25:39.601953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:25:39.602078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:25:39.604875 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 09:25:39.606898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:25:39.607027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:25:39.614902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:25:39.624572 systemd-udevd[1309]: Using default interface naming scheme 'v255'.
Jul 2 09:25:39.624738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:25:39.627024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:25:39.629590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:25:39.630760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:25:39.634757 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 09:25:39.640750 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 09:25:39.644040 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 09:25:39.646136 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:25:39.650171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:25:39.650328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:25:39.652015 augenrules[1331]: No rules
Jul 2 09:25:39.651995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:25:39.652127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:25:39.653978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:25:39.655674 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:25:39.655803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:25:39.658085 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 09:25:39.662757 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 09:25:39.681215 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 09:25:39.688807 systemd[1]: Finished ensure-sysext.service.
Jul 2 09:25:39.691898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 09:25:39.692728 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1354)
Jul 2 09:25:39.705713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 09:25:39.710940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 09:25:39.713939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 09:25:39.718819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 09:25:39.721561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 09:25:39.724594 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:25:39.729520 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 09:25:39.730978 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 09:25:39.732168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 09:25:39.732339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 09:25:39.740093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 09:25:39.740245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 09:25:39.745802 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 09:25:39.748191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 09:25:39.748407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1337)
Jul 2 09:25:39.753271 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 2 09:25:39.758883 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 09:25:39.759030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 09:25:39.770932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 09:25:39.771000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 09:25:39.797659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:25:39.809148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:25:39.821372 systemd-resolved[1303]: Positive Trust Anchors:
Jul 2 09:25:39.824650 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:25:39.824684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 09:25:39.824684 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:25:39.825966 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 09:25:39.828701 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 09:25:39.830774 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 09:25:39.832311 systemd-resolved[1303]: Defaulting to hostname 'linux'.
Jul 2 09:25:39.834663 systemd-networkd[1374]: lo: Link UP
Jul 2 09:25:39.834898 systemd-networkd[1374]: lo: Gained carrier
Jul 2 09:25:39.835634 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 09:25:39.835793 systemd-networkd[1374]: Enumeration completed
Jul 2 09:25:39.836553 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:25:39.836639 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:25:39.837353 systemd-networkd[1374]: eth0: Link UP
Jul 2 09:25:39.837473 systemd-networkd[1374]: eth0: Gained carrier
Jul 2 09:25:39.837540 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:25:39.838759 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:25:39.840059 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:25:39.841765 systemd[1]: Reached target network.target - Network.
Jul 2 09:25:39.842929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:25:39.845978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 09:25:39.849834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 09:25:39.851612 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:25:39.858488 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:25:39.859248 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Jul 2 09:25:39.860165 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 09:25:39.860225 systemd-timesyncd[1376]: Initial clock synchronization to Tue 2024-07-02 09:25:39.896583 UTC.
Jul 2 09:25:39.867997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:25:39.880877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 09:25:39.882310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:25:39.883522 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:25:39.884636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 09:25:39.885845 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 09:25:39.887234 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 09:25:39.888574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 09:25:39.889780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 09:25:39.891018 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 09:25:39.891057 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:25:39.891961 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:25:39.893670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 09:25:39.896053 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 09:25:39.906442 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 09:25:39.908622 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 09:25:39.910135 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 09:25:39.911330 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:25:39.912279 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:25:39.913243 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:25:39.913275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 09:25:39.914202 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 09:25:39.917486 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 09:25:39.916246 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 09:25:39.920137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 09:25:39.924161 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 09:25:39.925265 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 09:25:39.927566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 09:25:39.931871 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 09:25:39.935193 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 09:25:39.936476 jq[1406]: false
Jul 2 09:25:39.938704 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 09:25:39.948585 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found loop3
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found loop4
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found loop5
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda1
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda2
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda3
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found usr
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda4
Jul 2 09:25:39.949421 extend-filesystems[1407]: Found vda6
Jul 2 09:25:39.960162 extend-filesystems[1407]: Found vda7
Jul 2 09:25:39.960162 extend-filesystems[1407]: Found vda9
Jul 2 09:25:39.960162 extend-filesystems[1407]: Checking size of /dev/vda9
Jul 2 09:25:39.956459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 09:25:39.967008 extend-filesystems[1407]: Resized partition /dev/vda9
Jul 2 09:25:39.956866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 09:25:39.958298 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 09:25:39.964269 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 09:25:39.968406 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 09:25:39.971786 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 09:25:39.974846 dbus-daemon[1405]: [system] SELinux support is enabled
Jul 2 09:25:39.976257 extend-filesystems[1428]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 09:25:39.983496 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1338)
Jul 2 09:25:39.983538 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 09:25:39.971932 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 09:25:39.983604 jq[1425]: true
Jul 2 09:25:39.972178 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 09:25:39.972301 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 09:25:39.974845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 09:25:39.974974 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 09:25:39.976397 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 09:25:40.000248 jq[1431]: true
Jul 2 09:25:40.018081 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 09:25:40.018622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 09:25:40.018671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 09:25:40.020670 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 09:25:40.020720 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 09:25:40.026205 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 2 09:25:40.026582 tar[1430]: linux-arm64/helm
Jul 2 09:25:40.027231 systemd-logind[1418]: New seat seat0.
Jul 2 09:25:40.028569 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 09:25:40.034995 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 09:25:40.038310 update_engine[1424]: I0702 09:25:40.036731 1424 main.cc:92] Flatcar Update Engine starting
Jul 2 09:25:40.048324 update_engine[1424]: I0702 09:25:40.047621 1424 update_check_scheduler.cc:74] Next update check in 4m3s
Jul 2 09:25:40.043459 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 09:25:40.049304 extend-filesystems[1428]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 09:25:40.049304 extend-filesystems[1428]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 09:25:40.049304 extend-filesystems[1428]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 09:25:40.055366 extend-filesystems[1407]: Resized filesystem in /dev/vda9
Jul 2 09:25:40.056636 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 09:25:40.060919 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 09:25:40.061100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 09:25:40.081349 bash[1458]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 09:25:40.082296 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 09:25:40.084877 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 09:25:40.106217 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 09:25:40.206085 containerd[1432]: time="2024-07-02T09:25:40.205995705Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 09:25:40.236162 containerd[1432]: time="2024-07-02T09:25:40.236118313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 09:25:40.236162 containerd[1432]: time="2024-07-02T09:25:40.236161602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237485 containerd[1432]: time="2024-07-02T09:25:40.237449149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237485 containerd[1432]: time="2024-07-02T09:25:40.237481947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237723 containerd[1432]: time="2024-07-02T09:25:40.237685699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237723 containerd[1432]: time="2024-07-02T09:25:40.237721019Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 09:25:40.237843 containerd[1432]: time="2024-07-02T09:25:40.237799709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237874 containerd[1432]: time="2024-07-02T09:25:40.237852049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237874 containerd[1432]: time="2024-07-02T09:25:40.237864302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.237942 containerd[1432]: time="2024-07-02T09:25:40.237924731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.238135 containerd[1432]: time="2024-07-02T09:25:40.238113707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.238164 containerd[1432]: time="2024-07-02T09:25:40.238138575Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 09:25:40.238164 containerd[1432]: time="2024-07-02T09:25:40.238148586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 09:25:40.238266 containerd[1432]: time="2024-07-02T09:25:40.238241572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 09:25:40.238266 containerd[1432]: time="2024-07-02T09:25:40.238261555Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 09:25:40.238331 containerd[1432]: time="2024-07-02T09:25:40.238313775Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 09:25:40.238352 containerd[1432]: time="2024-07-02T09:25:40.238330354Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 09:25:40.241401 containerd[1432]: time="2024-07-02T09:25:40.241354960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 09:25:40.241401 containerd[1432]: time="2024-07-02T09:25:40.241398450Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 09:25:40.241579 containerd[1432]: time="2024-07-02T09:25:40.241411785Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 09:25:40.241579 containerd[1432]: time="2024-07-02T09:25:40.241443341Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 09:25:40.241579 containerd[1432]: time="2024-07-02T09:25:40.241457677Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 09:25:40.241579 containerd[1432]: time="2024-07-02T09:25:40.241467529Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 09:25:40.241579 containerd[1432]: time="2024-07-02T09:25:40.241478461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 09:25:40.241667 containerd[1432]: time="2024-07-02T09:25:40.241609971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 09:25:40.241667 containerd[1432]: time="2024-07-02T09:25:40.241626670Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 09:25:40.241667 containerd[1432]: time="2024-07-02T09:25:40.241641167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 09:25:40.241667 containerd[1432]: time="2024-07-02T09:25:40.241655743Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241668318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241683575Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241696189Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241717013Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241735754Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241750651Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241762385Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.241773 containerd[1432]: time="2024-07-02T09:25:40.241773638Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 09:25:40.241908 containerd[1432]: time="2024-07-02T09:25:40.241865222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 09:25:40.242165 containerd[1432]: time="2024-07-02T09:25:40.242147624Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 09:25:40.242223 containerd[1432]: time="2024-07-02T09:25:40.242177578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242223 containerd[1432]: time="2024-07-02T09:25:40.242191554Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 09:25:40.242223 containerd[1432]: time="2024-07-02T09:25:40.242211737Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 09:25:40.242359 containerd[1432]: time="2024-07-02T09:25:40.242318018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242359 containerd[1432]: time="2024-07-02T09:25:40.242333836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242359 containerd[1432]: time="2024-07-02T09:25:40.242345489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242359 containerd[1432]: time="2024-07-02T09:25:40.242356702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242544 containerd[1432]: time="2024-07-02T09:25:40.242368836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242544 containerd[1432]: time="2024-07-02T09:25:40.242381050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242544 containerd[1432]: time="2024-07-02T09:25:40.242411084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242544 containerd[1432]: time="2024-07-02T09:25:40.242423418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242544 containerd[1432]: time="2024-07-02T09:25:40.242437714Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242560014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242578034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242589688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242602382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242619361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242642 containerd[1432]: time="2024-07-02T09:25:40.242632737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242757 containerd[1432]: time="2024-07-02T09:25:40.242644710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.242757 containerd[1432]: time="2024-07-02T09:25:40.242656043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 09:25:40.243011 containerd[1432]: time="2024-07-02T09:25:40.242946254Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 09:25:40.243011 containerd[1432]: time="2024-07-02T09:25:40.243010687Z" level=info msg="Connect containerd service"
Jul 2 09:25:40.243167 containerd[1432]: time="2024-07-02T09:25:40.243036196Z" level=info msg="using legacy CRI server"
Jul 2 09:25:40.243167 containerd[1432]: time="2024-07-02T09:25:40.243042764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 09:25:40.243204 containerd[1432]: time="2024-07-02T09:25:40.243173553Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 09:25:40.243855 containerd[1432]: time="2024-07-02T09:25:40.243822653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 09:25:40.243913 containerd[1432]: time="2024-07-02T09:25:40.243874231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 09:25:40.243913 containerd[1432]: time="2024-07-02T09:25:40.243893734Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 09:25:40.243913 containerd[1432]: time="2024-07-02T09:25:40.243908831Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 09:25:40.243982 containerd[1432]: time="2024-07-02T09:25:40.243920484Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 09:25:40.244559 containerd[1432]: time="2024-07-02T09:25:40.244305002Z" level=info msg="Start subscribing containerd event"
Jul 2 09:25:40.244559 containerd[1432]: time="2024-07-02T09:25:40.244440997Z" level=info msg="Start recovering state"
Jul 2 09:25:40.244559 containerd[1432]: time="2024-07-02T09:25:40.244473314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 09:25:40.244559 containerd[1432]: time="2024-07-02T09:25:40.244521129Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 09:25:40.244915 containerd[1432]: time="2024-07-02T09:25:40.244890469Z" level=info msg="Start event monitor"
Jul 2 09:25:40.244991 containerd[1432]: time="2024-07-02T09:25:40.244978370Z" level=info msg="Start snapshots syncer"
Jul 2 09:25:40.245106 containerd[1432]: time="2024-07-02T09:25:40.245089376Z" level=info msg="Start cni network conf syncer for default"
Jul 2 09:25:40.245161 containerd[1432]: time="2024-07-02T09:25:40.245149365Z" level=info msg="Start streaming server"
Jul 2 09:25:40.245867 containerd[1432]: time="2024-07-02T09:25:40.245841033Z" level=info msg="containerd successfully booted in 0.040914s"
Jul 2 09:25:40.245883 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 09:25:40.372373 tar[1430]: linux-arm64/LICENSE
Jul 2 09:25:40.372373 tar[1430]: linux-arm64/README.md
Jul 2 09:25:40.386460 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 09:25:40.963681 systemd-networkd[1374]: eth0: Gained IPv6LL
Jul 2 09:25:40.966309 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 09:25:40.968324 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 09:25:40.978128 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 2 09:25:40.980607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:25:40.983063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 09:25:41.003554 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 09:25:41.006536 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 2 09:25:41.007482 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 2 09:25:41.009747 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 09:25:41.451762 sshd_keygen[1422]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 09:25:41.470324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 09:25:41.478708 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 09:25:41.485057 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 09:25:41.485262 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 09:25:41.488559 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 09:25:41.502926 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 09:25:41.505046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:25:41.508466 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 09:25:41.508991 (kubelet)[1515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:25:41.510577 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 09:25:41.511977 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 09:25:41.513056 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 09:25:41.514964 systemd[1]: Startup finished in 543ms (kernel) + 4.625s (initrd) + 3.220s (userspace) = 8.390s.
Jul 2 09:25:41.978609 kubelet[1515]: E0702 09:25:41.978551 1515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:25:41.980702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:25:41.980835 systemd[1]: kubelet.service: Failed with result 'exit-code'.
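[Editor's note] The kubelet failure above is the normal state of a node that has not yet run `kubeadm init` or `kubeadm join`: those commands are what write /var/lib/kubelet/config.yaml. For reference, a minimal KubeletConfiguration of the kind that ends up at that path might look like the sketch below — the field values are assumed illustrative defaults, not recovered from this host:

```yaml
# Illustrative sketch of /var/lib/kubelet/config.yaml; on this host the real
# file is generated by kubeadm, and these values are assumptions, not log data.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # consistent with SystemdCgroup:true in the containerd CRI config logged above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

systemd will keep restarting the unit until the file exists, which is why the failure repeats harmlessly during provisioning.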
Jul 2 09:25:46.610068 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 09:25:46.611179 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:49080.service - OpenSSH per-connection server daemon (10.0.0.1:49080).
Jul 2 09:25:46.670224 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 49080 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:46.672043 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:46.685109 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 09:25:46.693634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 09:25:46.695944 systemd-logind[1418]: New session 1 of user core.
Jul 2 09:25:46.702734 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 09:25:46.704904 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 09:25:46.711153 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:46.808128 systemd[1536]: Queued start job for default target default.target.
Jul 2 09:25:46.816342 systemd[1536]: Created slice app.slice - User Application Slice.
Jul 2 09:25:46.816374 systemd[1536]: Reached target paths.target - Paths.
Jul 2 09:25:46.816407 systemd[1536]: Reached target timers.target - Timers.
Jul 2 09:25:46.817660 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 09:25:46.827736 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 09:25:46.827802 systemd[1536]: Reached target sockets.target - Sockets.
Jul 2 09:25:46.827814 systemd[1536]: Reached target basic.target - Basic System.
Jul 2 09:25:46.827850 systemd[1536]: Reached target default.target - Main User Target.
Jul 2 09:25:46.827875 systemd[1536]: Startup finished in 111ms.
Jul 2 09:25:46.828124 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 09:25:46.829538 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 09:25:46.890988 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:49086.service - OpenSSH per-connection server daemon (10.0.0.1:49086).
Jul 2 09:25:46.926016 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 49086 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:46.927220 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:46.931031 systemd-logind[1418]: New session 2 of user core.
Jul 2 09:25:46.941766 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 09:25:46.994534 sshd[1547]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:47.008007 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:49086.service: Deactivated successfully.
Jul 2 09:25:47.011435 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 09:25:47.012599 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit.
Jul 2 09:25:47.013658 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:49098.service - OpenSSH per-connection server daemon (10.0.0.1:49098).
Jul 2 09:25:47.015817 systemd-logind[1418]: Removed session 2.
Jul 2 09:25:47.046439 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 49098 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:47.047207 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:47.050644 systemd-logind[1418]: New session 3 of user core.
Jul 2 09:25:47.065603 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 09:25:47.112423 sshd[1554]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:47.120596 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:49098.service: Deactivated successfully.
Jul 2 09:25:47.121945 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 09:25:47.122522 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit.
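[Editor's note] Each entry in this capture starts with a syslog-style timestamp — abbreviated month, day, and a microsecond-resolution clock, with no year recorded. A minimal Python sketch for turning those prefixes into datetimes (the helper name and the supplied year are mine, not from the log), shown computing the duration of the session-3 open/close pair above:

```python
from datetime import datetime

def parse_ts(line, year=2024):
    # Journal lines begin like "Jul 2 09:25:47.112423 ..."; the year is not
    # logged, so it must be supplied by the caller (2024 assumed here).
    month, day, clock = line.split()[:3]
    return datetime.strptime(f"{year} {month} {day} {clock}",
                             "%Y %b %d %H:%M:%S.%f")

opened = parse_ts("Jul 2 09:25:47.047207 sshd[1554]: session opened")
closed = parse_ts("Jul 2 09:25:47.112423 sshd[1554]: session closed")
print((closed - opened).total_seconds())  # → 0.065216
```

Because the year is absent from the prefix, durations that cross a year boundary would need external context; for a single boot like this one, any fixed year works.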
Jul 2 09:25:47.124105 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108).
Jul 2 09:25:47.125854 systemd-logind[1418]: Removed session 3.
Jul 2 09:25:47.155507 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:47.156637 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:47.159863 systemd-logind[1418]: New session 4 of user core.
Jul 2 09:25:47.172549 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 09:25:47.224379 sshd[1561]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:47.239573 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:49108.service: Deactivated successfully.
Jul 2 09:25:47.240989 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 09:25:47.242204 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit.
Jul 2 09:25:47.243235 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:49116.service - OpenSSH per-connection server daemon (10.0.0.1:49116).
Jul 2 09:25:47.244703 systemd-logind[1418]: Removed session 4.
Jul 2 09:25:47.275252 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 49116 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:47.276428 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:47.281231 systemd-logind[1418]: New session 5 of user core.
Jul 2 09:25:47.290543 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 09:25:47.347973 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 09:25:47.348224 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:25:47.364110 sudo[1571]: pam_unix(sudo:session): session closed for user root
Jul 2 09:25:47.365726 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:47.375657 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:49116.service: Deactivated successfully.
Jul 2 09:25:47.377070 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 09:25:47.378249 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit.
Jul 2 09:25:47.379483 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:49126.service - OpenSSH per-connection server daemon (10.0.0.1:49126).
Jul 2 09:25:47.380708 systemd-logind[1418]: Removed session 5.
Jul 2 09:25:47.412644 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 49126 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:47.412575 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:47.416606 systemd-logind[1418]: New session 6 of user core.
Jul 2 09:25:47.422523 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 09:25:47.472119 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 09:25:47.472360 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:25:47.474956 sudo[1580]: pam_unix(sudo:session): session closed for user root
Jul 2 09:25:47.479194 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 09:25:47.479443 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:25:47.495617 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 09:25:47.496765 auditctl[1583]: No rules
Jul 2 09:25:47.497076 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 09:25:47.497243 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 09:25:47.499252 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 09:25:47.521132 augenrules[1601]: No rules
Jul 2 09:25:47.524436 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 09:25:47.525328 sudo[1579]: pam_unix(sudo:session): session closed for user root
Jul 2 09:25:47.526732 sshd[1576]: pam_unix(sshd:session): session closed for user core
Jul 2 09:25:47.536635 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:49126.service: Deactivated successfully.
Jul 2 09:25:47.537994 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 09:25:47.539159 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit.
Jul 2 09:25:47.540195 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:49140.service - OpenSSH per-connection server daemon (10.0.0.1:49140).
Jul 2 09:25:47.540860 systemd-logind[1418]: Removed session 6.
Jul 2 09:25:47.570970 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 49140 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:25:47.572097 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:25:47.575382 systemd-logind[1418]: New session 7 of user core.
Jul 2 09:25:47.583519 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 09:25:47.633054 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 09:25:47.633299 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 09:25:47.737701 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 09:25:47.737835 (dockerd)[1622]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 09:25:47.971341 dockerd[1622]: time="2024-07-02T09:25:47.971273464Z" level=info msg="Starting up"
Jul 2 09:25:48.062919 dockerd[1622]: time="2024-07-02T09:25:48.062649657Z" level=info msg="Loading containers: start."
Jul 2 09:25:48.138415 kernel: Initializing XFRM netlink socket
Jul 2 09:25:48.198970 systemd-networkd[1374]: docker0: Link UP
Jul 2 09:25:48.207750 dockerd[1622]: time="2024-07-02T09:25:48.207703355Z" level=info msg="Loading containers: done."
Jul 2 09:25:48.263673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck583064227-merged.mount: Deactivated successfully.
Jul 2 09:25:48.265108 dockerd[1622]: time="2024-07-02T09:25:48.265068558Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 09:25:48.265279 dockerd[1622]: time="2024-07-02T09:25:48.265258326Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 09:25:48.265417 dockerd[1622]: time="2024-07-02T09:25:48.265377351Z" level=info msg="Daemon has completed initialization"
Jul 2 09:25:48.291513 dockerd[1622]: time="2024-07-02T09:25:48.291346674Z" level=info msg="API listen on /run/docker.sock"
Jul 2 09:25:48.291597 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 09:25:48.793850 containerd[1432]: time="2024-07-02T09:25:48.793760395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 09:25:49.402901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561360101.mount: Deactivated successfully.
Jul 2 09:25:50.643708 containerd[1432]: time="2024-07-02T09:25:50.643642141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:50.644144 containerd[1432]: time="2024-07-02T09:25:50.644096398Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940432"
Jul 2 09:25:50.644973 containerd[1432]: time="2024-07-02T09:25:50.644936936Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:50.648315 containerd[1432]: time="2024-07-02T09:25:50.648270542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:50.649443 containerd[1432]: time="2024-07-02T09:25:50.649407285Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 1.855534352s"
Jul 2 09:25:50.649484 containerd[1432]: time="2024-07-02T09:25:50.649444636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\""
Jul 2 09:25:50.668786 containerd[1432]: time="2024-07-02T09:25:50.668738247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 09:25:51.967124 containerd[1432]: time="2024-07-02T09:25:51.967045597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:51.967486 containerd[1432]: time="2024-07-02T09:25:51.967449321Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881373"
Jul 2 09:25:51.968382 containerd[1432]: time="2024-07-02T09:25:51.968334193Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:51.971304 containerd[1432]: time="2024-07-02T09:25:51.971272755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:51.972573 containerd[1432]: time="2024-07-02T09:25:51.972540815Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 1.303761253s"
Jul 2 09:25:51.972632 containerd[1432]: time="2024-07-02T09:25:51.972575963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\""
Jul 2 09:25:51.991925 containerd[1432]: time="2024-07-02T09:25:51.991892172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 09:25:52.231059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 09:25:52.242542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:25:52.334901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:25:52.338650 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:25:52.437352 kubelet[1842]: E0702 09:25:52.437287 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:25:52.441556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:25:52.441804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:25:54.051532 containerd[1432]: time="2024-07-02T09:25:54.051486104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:54.052729 containerd[1432]: time="2024-07-02T09:25:54.052056761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155690"
Jul 2 09:25:54.053372 containerd[1432]: time="2024-07-02T09:25:54.053334976Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:54.057415 containerd[1432]: time="2024-07-02T09:25:54.057355515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:54.058107 containerd[1432]: time="2024-07-02T09:25:54.058063752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 2.066132869s"
Jul 2 09:25:54.058107 containerd[1432]: time="2024-07-02T09:25:54.058102781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\""
Jul 2 09:25:54.077284 containerd[1432]: time="2024-07-02T09:25:54.077252699Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 09:25:55.110093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120180482.mount: Deactivated successfully.
Jul 2 09:25:55.296627 containerd[1432]: time="2024-07-02T09:25:55.296580761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:55.297143 containerd[1432]: time="2024-07-02T09:25:55.297117341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634094"
Jul 2 09:25:55.298465 containerd[1432]: time="2024-07-02T09:25:55.298062170Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:55.300595 containerd[1432]: time="2024-07-02T09:25:55.300561380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:55.301412 containerd[1432]: time="2024-07-02T09:25:55.301367791Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 1.223931118s"
Jul 2 09:25:55.301492 containerd[1432]: time="2024-07-02T09:25:55.301412263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\""
Jul 2 09:25:55.320201 containerd[1432]: time="2024-07-02T09:25:55.320154975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 09:25:55.871944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205267135.mount: Deactivated successfully.
Jul 2 09:25:56.591936 containerd[1432]: time="2024-07-02T09:25:56.591878908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:56.592331 containerd[1432]: time="2024-07-02T09:25:56.592284146Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jul 2 09:25:56.593426 containerd[1432]: time="2024-07-02T09:25:56.593360765Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:56.597546 containerd[1432]: time="2024-07-02T09:25:56.597491759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:56.598164 containerd[1432]: time="2024-07-02T09:25:56.598118148Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.277929429s"
Jul 2 09:25:56.598164 containerd[1432]: time="2024-07-02T09:25:56.598158096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 2 09:25:56.617094 containerd[1432]: time="2024-07-02T09:25:56.617032564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 09:25:57.066631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239492815.mount: Deactivated successfully.
Jul 2 09:25:57.071685 containerd[1432]: time="2024-07-02T09:25:57.071641466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:57.072178 containerd[1432]: time="2024-07-02T09:25:57.072145642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 09:25:57.072955 containerd[1432]: time="2024-07-02T09:25:57.072915073Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:57.075226 containerd[1432]: time="2024-07-02T09:25:57.075189144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:57.075926 containerd[1432]: time="2024-07-02T09:25:57.075889650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 458.795924ms"
Jul 2 09:25:57.075926 containerd[1432]: time="2024-07-02T09:25:57.075922271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 09:25:57.093355 containerd[1432]: time="2024-07-02T09:25:57.093319834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 09:25:57.637823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719131203.mount: Deactivated successfully.
Jul 2 09:25:59.352071 containerd[1432]: time="2024-07-02T09:25:59.352006098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:59.353305 containerd[1432]: time="2024-07-02T09:25:59.353256277Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jul 2 09:25:59.353404 containerd[1432]: time="2024-07-02T09:25:59.353367907Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:59.357197 containerd[1432]: time="2024-07-02T09:25:59.357151267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:25:59.359503 containerd[1432]: time="2024-07-02T09:25:59.359455744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.26610441s"
Jul 2 09:25:59.359550 containerd[1432]: time="2024-07-02T09:25:59.359504054Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jul 2 09:26:02.691982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 09:26:02.702558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:26:02.836042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:26:02.839552 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 09:26:02.874666 kubelet[2062]: E0702 09:26:02.874573 2062 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 09:26:02.877411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 09:26:02.877552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 09:26:04.053303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:26:04.063793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:26:04.083419 systemd[1]: Reloading requested from client PID 2078 ('systemctl') (unit session-7.scope)...
Jul 2 09:26:04.083435 systemd[1]: Reloading...
Jul 2 09:26:04.144431 zram_generator::config[2115]: No configuration found.
Jul 2 09:26:04.430895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 09:26:04.484346 systemd[1]: Reloading finished in 400 ms.
Jul 2 09:26:04.523236 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:26:04.525473 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 09:26:04.525653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:26:04.527060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 09:26:04.616470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 09:26:04.620131 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 09:26:04.660301 kubelet[2162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:26:04.660301 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 09:26:04.660301 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 09:26:04.661161 kubelet[2162]: I0702 09:26:04.661124 2162 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 09:26:05.136915 kubelet[2162]: I0702 09:26:05.136881 2162 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 09:26:05.136915 kubelet[2162]: I0702 09:26:05.136907 2162 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 09:26:05.137127 kubelet[2162]: I0702 09:26:05.137111 2162 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 09:26:05.181589 kubelet[2162]: E0702 09:26:05.181547 2162 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.181688 kubelet[2162]: I0702 09:26:05.181661 2162 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 09:26:05.188636 kubelet[2162]: I0702 09:26:05.188603 2162 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 09:26:05.189800 kubelet[2162]: I0702 09:26:05.189748 2162 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 09:26:05.189955 kubelet[2162]: I0702 09:26:05.189792 2162 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 09:26:05.190041 kubelet[2162]: I0702 09:26:05.190019 2162 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 09:26:05.190041 kubelet[2162]: I0702 09:26:05.190028 2162 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 09:26:05.190231 kubelet[2162]: I0702 09:26:05.190205 2162 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:26:05.191131 kubelet[2162]: I0702 09:26:05.191092 2162 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 09:26:05.191131 kubelet[2162]: I0702 09:26:05.191112 2162 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 09:26:05.191715 kubelet[2162]: I0702 09:26:05.191431 2162 kubelet.go:312] "Adding apiserver pod source"
Jul 2 09:26:05.191715 kubelet[2162]: I0702 09:26:05.191630 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 09:26:05.191940 kubelet[2162]: W0702 09:26:05.191899 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.192019 kubelet[2162]: E0702 09:26:05.192009 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.192071 kubelet[2162]: W0702 09:26:05.192014 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.192129 kubelet[2162]: E0702 09:26:05.192121 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.194522 kubelet[2162]: I0702 09:26:05.194484 2162 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 09:26:05.194851 kubelet[2162]: I0702 09:26:05.194838 2162 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 09:26:05.194951 kubelet[2162]: W0702 09:26:05.194939 2162 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 09:26:05.195963 kubelet[2162]: I0702 09:26:05.195934 2162 server.go:1264] "Started kubelet"
Jul 2 09:26:05.199419 kubelet[2162]: I0702 09:26:05.197106 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 09:26:05.199419 kubelet[2162]: I0702 09:26:05.197221 2162 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 09:26:05.199419 kubelet[2162]: E0702 09:26:05.197954 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de5b2f044df43e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 09:26:05.19591635 +0000 UTC m=+0.572729470,LastTimestamp:2024-07-02 09:26:05.19591635 +0000 UTC m=+0.572729470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 09:26:05.199419 kubelet[2162]: I0702 09:26:05.198316 2162 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 09:26:05.201022 kubelet[2162]: I0702 09:26:05.200228 2162 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 09:26:05.204194 kubelet[2162]: I0702 09:26:05.204162 2162 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 09:26:05.205885 kubelet[2162]: I0702 09:26:05.205852 2162 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 09:26:05.206338 kubelet[2162]: W0702 09:26:05.206278 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.206338 kubelet[2162]: E0702 09:26:05.206332 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.206436 kubelet[2162]: E0702 09:26:05.206401 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms"
Jul 2 09:26:05.208166 kubelet[2162]: I0702 09:26:05.208098 2162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 09:26:05.208347 kubelet[2162]: I0702 09:26:05.208321 2162 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 09:26:05.209634 kubelet[2162]: I0702 09:26:05.209232 2162 factory.go:221] Registration of the systemd container factory successfully
Jul 2 09:26:05.209634 kubelet[2162]: I0702 09:26:05.209526 2162 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 09:26:05.212406 kubelet[2162]: I0702 09:26:05.211979 2162 factory.go:221] Registration of the containerd container factory successfully
Jul 2 09:26:05.217540 kubelet[2162]: I0702 09:26:05.217491 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 09:26:05.218276 kubelet[2162]: E0702 09:26:05.218136 2162 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 09:26:05.219122 kubelet[2162]: I0702 09:26:05.218416 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 09:26:05.219122 kubelet[2162]: I0702 09:26:05.218449 2162 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 09:26:05.219122 kubelet[2162]: I0702 09:26:05.218466 2162 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 09:26:05.219122 kubelet[2162]: E0702 09:26:05.218515 2162 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 09:26:05.221468 kubelet[2162]: W0702 09:26:05.221370 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.221468 kubelet[2162]: E0702 09:26:05.221454 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jul 2 09:26:05.222101 kubelet[2162]: I0702 09:26:05.222072 2162 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 09:26:05.222101 kubelet[2162]: I0702 09:26:05.222090 2162 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 09:26:05.222159 kubelet[2162]: I0702 09:26:05.222130 2162 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 09:26:05.294496 kubelet[2162]: I0702 09:26:05.294453 2162 policy_none.go:49] "None policy: Start"
Jul 2 09:26:05.295340 kubelet[2162]: I0702 09:26:05.295308 2162 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 09:26:05.295340 kubelet[2162]: I0702 09:26:05.295341 2162 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 09:26:05.300928 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 09:26:05.301584 kubelet[2162]: I0702 09:26:05.301283 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 09:26:05.303476 kubelet[2162]: E0702 09:26:05.303430 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Jul 2 09:26:05.316187 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 09:26:05.318836 kubelet[2162]: E0702 09:26:05.318617 2162 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 09:26:05.319192 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 09:26:05.328112 kubelet[2162]: I0702 09:26:05.328078 2162 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:26:05.328834 kubelet[2162]: I0702 09:26:05.328425 2162 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 09:26:05.328834 kubelet[2162]: I0702 09:26:05.328525 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:26:05.330109 kubelet[2162]: E0702 09:26:05.330093 2162 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 09:26:05.407050 kubelet[2162]: E0702 09:26:05.406921 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Jul 2 09:26:05.505563 kubelet[2162]: I0702 09:26:05.505439 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:26:05.505853 kubelet[2162]: E0702 09:26:05.505805 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 2 09:26:05.519183 kubelet[2162]: I0702 09:26:05.519111 2162 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:26:05.520126 kubelet[2162]: I0702 09:26:05.520083 2162 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:26:05.520936 kubelet[2162]: I0702 09:26:05.520897 2162 topology_manager.go:215] "Topology Admit Handler" podUID="004ccff6682b4ac8ea9f7d8f974ebad8" 
podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:26:05.526229 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jul 2 09:26:05.537650 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jul 2 09:26:05.550770 systemd[1]: Created slice kubepods-burstable-pod004ccff6682b4ac8ea9f7d8f974ebad8.slice - libcontainer container kubepods-burstable-pod004ccff6682b4ac8ea9f7d8f974ebad8.slice. Jul 2 09:26:05.608687 kubelet[2162]: I0702 09:26:05.608547 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:05.608687 kubelet[2162]: I0702 09:26:05.608587 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:05.608687 kubelet[2162]: I0702 09:26:05.608611 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:05.608687 kubelet[2162]: I0702 09:26:05.608629 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:05.608687 kubelet[2162]: I0702 09:26:05.608658 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:05.608898 kubelet[2162]: I0702 09:26:05.608723 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:05.608898 kubelet[2162]: I0702 09:26:05.608782 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:05.608898 kubelet[2162]: I0702 09:26:05.608819 2162 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:26:05.608898 kubelet[2162]: I0702 09:26:05.608847 2162 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:05.808324 kubelet[2162]: E0702 09:26:05.808161 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Jul 2 09:26:05.835556 kubelet[2162]: E0702 09:26:05.835512 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:05.838123 containerd[1432]: time="2024-07-02T09:26:05.837982722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:05.848197 kubelet[2162]: E0702 09:26:05.848163 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:05.850747 containerd[1432]: time="2024-07-02T09:26:05.850513183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:05.853269 kubelet[2162]: E0702 09:26:05.853236 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:05.853600 containerd[1432]: time="2024-07-02T09:26:05.853560314Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:004ccff6682b4ac8ea9f7d8f974ebad8,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:05.907692 kubelet[2162]: I0702 09:26:05.907658 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:26:05.908019 kubelet[2162]: E0702 09:26:05.907980 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 2 09:26:06.143878 kubelet[2162]: W0702 09:26:06.143728 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.143878 kubelet[2162]: E0702 09:26:06.143793 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.171113 kubelet[2162]: W0702 09:26:06.171056 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.171113 kubelet[2162]: E0702 09:26:06.171095 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.247883 kubelet[2162]: W0702 09:26:06.247858 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.247961 kubelet[2162]: E0702 09:26:06.247889 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.256516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980324467.mount: Deactivated successfully. Jul 2 09:26:06.262048 containerd[1432]: time="2024-07-02T09:26:06.262004463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:26:06.262847 containerd[1432]: time="2024-07-02T09:26:06.262824993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:26:06.263548 containerd[1432]: time="2024-07-02T09:26:06.263525263Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:26:06.264153 containerd[1432]: time="2024-07-02T09:26:06.264047244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:26:06.264769 containerd[1432]: time="2024-07-02T09:26:06.264724742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 09:26:06.265229 containerd[1432]: time="2024-07-02T09:26:06.265207823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:26:06.266407 containerd[1432]: 
time="2024-07-02T09:26:06.265930424Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:26:06.270052 containerd[1432]: time="2024-07-02T09:26:06.270010342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:26:06.270916 containerd[1432]: time="2024-07-02T09:26:06.270891102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 420.295396ms" Jul 2 09:26:06.271640 containerd[1432]: time="2024-07-02T09:26:06.271505649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 433.424236ms" Jul 2 09:26:06.273780 containerd[1432]: time="2024-07-02T09:26:06.273681656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 420.043382ms" Jul 2 09:26:06.431648 containerd[1432]: time="2024-07-02T09:26:06.431339446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:06.432723 containerd[1432]: time="2024-07-02T09:26:06.431570882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.432723 containerd[1432]: time="2024-07-02T09:26:06.432690801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:06.432723 containerd[1432]: time="2024-07-02T09:26:06.432302847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:06.432723 containerd[1432]: time="2024-07-02T09:26:06.432699125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.433001 containerd[1432]: time="2024-07-02T09:26:06.432710291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:06.433001 containerd[1432]: time="2024-07-02T09:26:06.432755033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.433001 containerd[1432]: time="2024-07-02T09:26:06.432797694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:06.433001 containerd[1432]: time="2024-07-02T09:26:06.432811862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.433139 containerd[1432]: time="2024-07-02T09:26:06.432722897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.433225 containerd[1432]: time="2024-07-02T09:26:06.432738585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:06.433225 containerd[1432]: time="2024-07-02T09:26:06.432762917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:06.455626 systemd[1]: Started cri-containerd-30c6de0ee5e9fc806f3b64954374d5a2673bb00b42a491efe0e09b8d86367984.scope - libcontainer container 30c6de0ee5e9fc806f3b64954374d5a2673bb00b42a491efe0e09b8d86367984. Jul 2 09:26:06.457271 systemd[1]: Started cri-containerd-49e3666aa75f53566f6076c11db3f67d45ae87aaa8357beea0c10b64aa9b0e39.scope - libcontainer container 49e3666aa75f53566f6076c11db3f67d45ae87aaa8357beea0c10b64aa9b0e39. Jul 2 09:26:06.459901 systemd[1]: Started cri-containerd-b9eba89271fd675e3ff2ad15be526e4dd76e5ca42ef8c58f313a2612bc1bbce1.scope - libcontainer container b9eba89271fd675e3ff2ad15be526e4dd76e5ca42ef8c58f313a2612bc1bbce1. 
Jul 2 09:26:06.491033 containerd[1432]: time="2024-07-02T09:26:06.490379216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9eba89271fd675e3ff2ad15be526e4dd76e5ca42ef8c58f313a2612bc1bbce1\"" Jul 2 09:26:06.491033 containerd[1432]: time="2024-07-02T09:26:06.490415715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:004ccff6682b4ac8ea9f7d8f974ebad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e3666aa75f53566f6076c11db3f67d45ae87aaa8357beea0c10b64aa9b0e39\"" Jul 2 09:26:06.494369 containerd[1432]: time="2024-07-02T09:26:06.494335152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c6de0ee5e9fc806f3b64954374d5a2673bb00b42a491efe0e09b8d86367984\"" Jul 2 09:26:06.494548 kubelet[2162]: E0702 09:26:06.494520 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:06.496054 kubelet[2162]: E0702 09:26:06.496033 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:06.497403 kubelet[2162]: E0702 09:26:06.497352 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:06.508527 containerd[1432]: time="2024-07-02T09:26:06.508498307Z" level=info msg="CreateContainer within sandbox \"30c6de0ee5e9fc806f3b64954374d5a2673bb00b42a491efe0e09b8d86367984\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 09:26:06.508722 containerd[1432]: 
time="2024-07-02T09:26:06.508689402Z" level=info msg="CreateContainer within sandbox \"b9eba89271fd675e3ff2ad15be526e4dd76e5ca42ef8c58f313a2612bc1bbce1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 09:26:06.508794 containerd[1432]: time="2024-07-02T09:26:06.508762799Z" level=info msg="CreateContainer within sandbox \"49e3666aa75f53566f6076c11db3f67d45ae87aaa8357beea0c10b64aa9b0e39\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 09:26:06.527655 containerd[1432]: time="2024-07-02T09:26:06.527579998Z" level=info msg="CreateContainer within sandbox \"b9eba89271fd675e3ff2ad15be526e4dd76e5ca42ef8c58f313a2612bc1bbce1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e91e289245ef40f1c89540f1da31f1f123bcaf6107f609d123f78d670905105d\"" Jul 2 09:26:06.528535 containerd[1432]: time="2024-07-02T09:26:06.528510023Z" level=info msg="StartContainer for \"e91e289245ef40f1c89540f1da31f1f123bcaf6107f609d123f78d670905105d\"" Jul 2 09:26:06.531356 containerd[1432]: time="2024-07-02T09:26:06.531323588Z" level=info msg="CreateContainer within sandbox \"49e3666aa75f53566f6076c11db3f67d45ae87aaa8357beea0c10b64aa9b0e39\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73e387a0c6e2efa80489facb5af1b4fbd80e4fbee4094426ea287991ee67d014\"" Jul 2 09:26:06.531847 containerd[1432]: time="2024-07-02T09:26:06.531727870Z" level=info msg="CreateContainer within sandbox \"30c6de0ee5e9fc806f3b64954374d5a2673bb00b42a491efe0e09b8d86367984\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bd01c7377b5480401479c7c27aba7068c2518c7235f230f42d952e4958855f5\"" Jul 2 09:26:06.532056 containerd[1432]: time="2024-07-02T09:26:06.532030941Z" level=info msg="StartContainer for \"5bd01c7377b5480401479c7c27aba7068c2518c7235f230f42d952e4958855f5\"" Jul 2 09:26:06.532182 containerd[1432]: time="2024-07-02T09:26:06.532134233Z" level=info msg="StartContainer for 
\"73e387a0c6e2efa80489facb5af1b4fbd80e4fbee4094426ea287991ee67d014\"" Jul 2 09:26:06.557565 systemd[1]: Started cri-containerd-e91e289245ef40f1c89540f1da31f1f123bcaf6107f609d123f78d670905105d.scope - libcontainer container e91e289245ef40f1c89540f1da31f1f123bcaf6107f609d123f78d670905105d. Jul 2 09:26:06.562408 systemd[1]: Started cri-containerd-5bd01c7377b5480401479c7c27aba7068c2518c7235f230f42d952e4958855f5.scope - libcontainer container 5bd01c7377b5480401479c7c27aba7068c2518c7235f230f42d952e4958855f5. Jul 2 09:26:06.564265 systemd[1]: Started cri-containerd-73e387a0c6e2efa80489facb5af1b4fbd80e4fbee4094426ea287991ee67d014.scope - libcontainer container 73e387a0c6e2efa80489facb5af1b4fbd80e4fbee4094426ea287991ee67d014. Jul 2 09:26:06.572857 kubelet[2162]: W0702 09:26:06.572773 2162 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.572857 kubelet[2162]: E0702 09:26:06.572832 2162 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 2 09:26:06.592892 containerd[1432]: time="2024-07-02T09:26:06.592803978Z" level=info msg="StartContainer for \"e91e289245ef40f1c89540f1da31f1f123bcaf6107f609d123f78d670905105d\" returns successfully" Jul 2 09:26:06.600927 containerd[1432]: time="2024-07-02T09:26:06.600851277Z" level=info msg="StartContainer for \"5bd01c7377b5480401479c7c27aba7068c2518c7235f230f42d952e4958855f5\" returns successfully" Jul 2 09:26:06.608733 kubelet[2162]: E0702 09:26:06.608681 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.151:6443: connect: connection refused" interval="1.6s" Jul 2 09:26:06.611054 containerd[1432]: time="2024-07-02T09:26:06.611022558Z" level=info msg="StartContainer for \"73e387a0c6e2efa80489facb5af1b4fbd80e4fbee4094426ea287991ee67d014\" returns successfully" Jul 2 09:26:06.710416 kubelet[2162]: I0702 09:26:06.710072 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:26:06.710652 kubelet[2162]: E0702 09:26:06.710608 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 2 09:26:07.227119 kubelet[2162]: E0702 09:26:07.227092 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:07.229349 kubelet[2162]: E0702 09:26:07.228548 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:07.229716 kubelet[2162]: E0702 09:26:07.229689 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:08.233479 kubelet[2162]: E0702 09:26:08.233449 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:08.313074 kubelet[2162]: I0702 09:26:08.312787 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:26:08.346469 kubelet[2162]: E0702 09:26:08.346416 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 09:26:08.452609 kubelet[2162]: I0702 
09:26:08.450436 2162 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:26:09.194043 kubelet[2162]: I0702 09:26:09.193978 2162 apiserver.go:52] "Watching apiserver" Jul 2 09:26:09.204621 kubelet[2162]: I0702 09:26:09.204578 2162 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 09:26:09.240533 kubelet[2162]: E0702 09:26:09.239836 2162 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:09.240533 kubelet[2162]: E0702 09:26:09.240222 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:10.566448 systemd[1]: Reloading requested from client PID 2440 ('systemctl') (unit session-7.scope)... Jul 2 09:26:10.566463 systemd[1]: Reloading... Jul 2 09:26:10.630419 zram_generator::config[2477]: No configuration found. Jul 2 09:26:10.712434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:26:10.777649 systemd[1]: Reloading finished in 210 ms. Jul 2 09:26:10.818513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:26:10.831432 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:26:10.831788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:26:10.831911 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 118.6M memory peak, 0B memory swap peak. Jul 2 09:26:10.837630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:26:10.935991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:26:10.940338 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:26:10.974723 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:26:10.974723 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:26:10.974723 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:26:10.975103 kubelet[2519]: I0702 09:26:10.974751 2519 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:26:10.980350 kubelet[2519]: I0702 09:26:10.979534 2519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 09:26:10.980350 kubelet[2519]: I0702 09:26:10.980344 2519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:26:10.980567 kubelet[2519]: I0702 09:26:10.980529 2519 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 09:26:10.981799 kubelet[2519]: I0702 09:26:10.981775 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 09:26:10.982914 kubelet[2519]: I0702 09:26:10.982892 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:26:10.992758 kubelet[2519]: I0702 09:26:10.992665 2519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 09:26:10.993079 kubelet[2519]: I0702 09:26:10.993048 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:26:10.993325 kubelet[2519]: I0702 09:26:10.993140 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993429 2519 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:26:10.993736 
kubelet[2519]: I0702 09:26:10.993443 2519 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993480 2519 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993587 2519 kubelet.go:400] "Attempting to sync node with API server" Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993600 2519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993625 2519 kubelet.go:312] "Adding apiserver pod source" Jul 2 09:26:10.993736 kubelet[2519]: I0702 09:26:10.993638 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:26:10.995315 kubelet[2519]: I0702 09:26:10.995292 2519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:26:10.995486 kubelet[2519]: I0702 09:26:10.995471 2519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 09:26:10.995854 kubelet[2519]: I0702 09:26:10.995829 2519 server.go:1264] "Started kubelet" Jul 2 09:26:10.996454 kubelet[2519]: I0702 09:26:10.996229 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 09:26:10.996522 kubelet[2519]: I0702 09:26:10.996487 2519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:26:10.996637 kubelet[2519]: I0702 09:26:10.996620 2519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:26:10.996997 kubelet[2519]: I0702 09:26:10.996973 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:26:10.998826 kubelet[2519]: I0702 09:26:10.998802 2519 server.go:455] "Adding debug handlers to kubelet server" Jul 2 09:26:10.999196 kubelet[2519]: I0702 09:26:10.999155 2519 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Jul 2 09:26:11.001519 kubelet[2519]: I0702 09:26:11.001488 2519 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 09:26:11.001762 kubelet[2519]: I0702 09:26:11.001733 2519 reconciler.go:26] "Reconciler: start to sync state" Jul 2 09:26:11.005516 kubelet[2519]: I0702 09:26:11.004039 2519 factory.go:221] Registration of the systemd container factory successfully Jul 2 09:26:11.005516 kubelet[2519]: I0702 09:26:11.004223 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 09:26:11.018237 kubelet[2519]: E0702 09:26:11.018199 2519 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:26:11.020990 kubelet[2519]: I0702 09:26:11.020964 2519 factory.go:221] Registration of the containerd container factory successfully Jul 2 09:26:11.030972 kubelet[2519]: I0702 09:26:11.030791 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:26:11.031793 kubelet[2519]: I0702 09:26:11.031762 2519 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 09:26:11.031925 kubelet[2519]: I0702 09:26:11.031913 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:26:11.031986 kubelet[2519]: I0702 09:26:11.031977 2519 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 09:26:11.032095 kubelet[2519]: E0702 09:26:11.032075 2519 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:26:11.054334 kubelet[2519]: I0702 09:26:11.054298 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:26:11.054334 kubelet[2519]: I0702 09:26:11.054322 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:26:11.054334 kubelet[2519]: I0702 09:26:11.054341 2519 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:26:11.054514 kubelet[2519]: I0702 09:26:11.054505 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 09:26:11.054539 kubelet[2519]: I0702 09:26:11.054517 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 09:26:11.054539 kubelet[2519]: I0702 09:26:11.054535 2519 policy_none.go:49] "None policy: Start" Jul 2 09:26:11.055121 kubelet[2519]: I0702 09:26:11.055076 2519 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 09:26:11.055155 kubelet[2519]: I0702 09:26:11.055131 2519 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:26:11.055302 kubelet[2519]: I0702 09:26:11.055267 2519 state_mem.go:75] "Updated machine memory state" Jul 2 09:26:11.059306 kubelet[2519]: I0702 09:26:11.059283 2519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:26:11.059800 kubelet[2519]: I0702 09:26:11.059531 2519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 09:26:11.059800 kubelet[2519]: I0702 09:26:11.059623 2519 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:26:11.105153 kubelet[2519]: I0702 09:26:11.103997 2519 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:26:11.110643 kubelet[2519]: I0702 09:26:11.110583 2519 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 09:26:11.110720 kubelet[2519]: I0702 09:26:11.110681 2519 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:26:11.132782 kubelet[2519]: I0702 09:26:11.132724 2519 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:26:11.132900 kubelet[2519]: I0702 09:26:11.132832 2519 topology_manager.go:215] "Topology Admit Handler" podUID="004ccff6682b4ac8ea9f7d8f974ebad8" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:26:11.132900 kubelet[2519]: I0702 09:26:11.132870 2519 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:26:11.202971 kubelet[2519]: I0702 09:26:11.202937 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:11.202971 kubelet[2519]: I0702 09:26:11.202977 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:11.203086 kubelet[2519]: I0702 09:26:11.202997 2519 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:11.203086 kubelet[2519]: I0702 09:26:11.203019 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:11.203086 kubelet[2519]: I0702 09:26:11.203036 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:11.203086 kubelet[2519]: I0702 09:26:11.203052 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:11.203086 kubelet[2519]: I0702 09:26:11.203070 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:26:11.203192 
kubelet[2519]: I0702 09:26:11.203084 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/004ccff6682b4ac8ea9f7d8f974ebad8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"004ccff6682b4ac8ea9f7d8f974ebad8\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:11.203192 kubelet[2519]: I0702 09:26:11.203099 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:26:11.438609 kubelet[2519]: E0702 09:26:11.438513 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:11.439209 kubelet[2519]: E0702 09:26:11.439176 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:11.441108 kubelet[2519]: E0702 09:26:11.441033 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:11.572710 sudo[2555]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 09:26:11.572938 sudo[2555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 09:26:11.994868 kubelet[2519]: I0702 09:26:11.994828 2519 apiserver.go:52] "Watching apiserver" Jul 2 09:26:12.000602 sudo[2555]: pam_unix(sudo:session): session closed for user root Jul 2 09:26:12.002217 kubelet[2519]: I0702 09:26:12.002163 2519 desired_state_of_world_populator.go:157] "Finished 
populating initial desired state of world" Jul 2 09:26:12.043110 kubelet[2519]: E0702 09:26:12.042628 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:12.047111 kubelet[2519]: E0702 09:26:12.047071 2519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 09:26:12.047527 kubelet[2519]: E0702 09:26:12.047497 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:12.047901 kubelet[2519]: E0702 09:26:12.047873 2519 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 09:26:12.048158 kubelet[2519]: E0702 09:26:12.048131 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:12.068807 kubelet[2519]: I0702 09:26:12.068735 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.068717428 podStartE2EDuration="1.068717428s" podCreationTimestamp="2024-07-02 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:12.058878446 +0000 UTC m=+1.115395453" watchObservedRunningTime="2024-07-02 09:26:12.068717428 +0000 UTC m=+1.125234395" Jul 2 09:26:12.080418 kubelet[2519]: I0702 09:26:12.077264 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.0772465900000001 podStartE2EDuration="1.07724659s" 
podCreationTimestamp="2024-07-02 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:12.068847642 +0000 UTC m=+1.125364649" watchObservedRunningTime="2024-07-02 09:26:12.07724659 +0000 UTC m=+1.133763597" Jul 2 09:26:12.087501 kubelet[2519]: I0702 09:26:12.087443 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.087426553 podStartE2EDuration="1.087426553s" podCreationTimestamp="2024-07-02 09:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:12.07903953 +0000 UTC m=+1.135556537" watchObservedRunningTime="2024-07-02 09:26:12.087426553 +0000 UTC m=+1.143943560" Jul 2 09:26:13.043475 kubelet[2519]: E0702 09:26:13.043444 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:13.043795 kubelet[2519]: E0702 09:26:13.043674 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:13.710875 sudo[1612]: pam_unix(sudo:session): session closed for user root Jul 2 09:26:13.712297 sshd[1609]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:13.715995 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:49140.service: Deactivated successfully. Jul 2 09:26:13.717896 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:26:13.718082 systemd[1]: session-7.scope: Consumed 7.132s CPU time, 136.0M memory peak, 0B memory swap peak. Jul 2 09:26:13.719463 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:26:13.720695 systemd-logind[1418]: Removed session 7. 
Jul 2 09:26:14.044931 kubelet[2519]: E0702 09:26:14.044854 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:19.651465 kubelet[2519]: E0702 09:26:19.651322 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:19.860145 kubelet[2519]: E0702 09:26:19.859086 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:20.052142 kubelet[2519]: E0702 09:26:20.052116 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:20.052515 kubelet[2519]: E0702 09:26:20.052344 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:20.177139 kubelet[2519]: E0702 09:26:20.177115 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:21.052830 kubelet[2519]: E0702 09:26:21.052764 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:21.054179 kubelet[2519]: E0702 09:26:21.054123 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:24.090997 kubelet[2519]: I0702 09:26:24.090747 2519 kuberuntime_manager.go:1523] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 09:26:24.091334 kubelet[2519]: I0702 09:26:24.091258 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 09:26:24.091363 containerd[1432]: time="2024-07-02T09:26:24.091112968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 09:26:24.151453 kubelet[2519]: I0702 09:26:24.151414 2519 topology_manager.go:215] "Topology Admit Handler" podUID="ea1095dd-8db5-4325-ab20-ac486cdf9b11" podNamespace="kube-system" podName="kube-proxy-jd8js" Jul 2 09:26:24.156316 kubelet[2519]: I0702 09:26:24.154165 2519 topology_manager.go:215] "Topology Admit Handler" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" podNamespace="kube-system" podName="cilium-b26jp" Jul 2 09:26:24.165800 systemd[1]: Created slice kubepods-besteffort-podea1095dd_8db5_4325_ab20_ac486cdf9b11.slice - libcontainer container kubepods-besteffort-podea1095dd_8db5_4325_ab20_ac486cdf9b11.slice. 
Jul 2 09:26:24.186366 kubelet[2519]: I0702 09:26:24.186306 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-bpf-maps\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186366 kubelet[2519]: I0702 09:26:24.186349 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cni-path\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186366 kubelet[2519]: I0702 09:26:24.186370 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjvg\" (UniqueName: \"kubernetes.io/projected/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-api-access-jdjvg\") pod \"kube-proxy-jd8js\" (UID: \"ea1095dd-8db5-4325-ab20-ac486cdf9b11\") " pod="kube-system/kube-proxy-jd8js" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186396 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-hubble-tls\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186413 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtzl\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186431 2519 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-run\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186444 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-cgroup\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186458 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-proxy\") pod \"kube-proxy-jd8js\" (UID: \"ea1095dd-8db5-4325-ab20-ac486cdf9b11\") " pod="kube-system/kube-proxy-jd8js" Jul 2 09:26:24.186551 kubelet[2519]: I0702 09:26:24.186492 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-lib-modules\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186677 kubelet[2519]: I0702 09:26:24.186510 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-xtables-lock\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186677 kubelet[2519]: I0702 09:26:24.186524 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-kernel\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186677 kubelet[2519]: I0702 09:26:24.186538 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea1095dd-8db5-4325-ab20-ac486cdf9b11-xtables-lock\") pod \"kube-proxy-jd8js\" (UID: \"ea1095dd-8db5-4325-ab20-ac486cdf9b11\") " pod="kube-system/kube-proxy-jd8js" Jul 2 09:26:24.186677 kubelet[2519]: I0702 09:26:24.186552 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-config-path\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.186677 kubelet[2519]: I0702 09:26:24.186566 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-net\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.189623 kubelet[2519]: I0702 09:26:24.186581 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea1095dd-8db5-4325-ab20-ac486cdf9b11-lib-modules\") pod \"kube-proxy-jd8js\" (UID: \"ea1095dd-8db5-4325-ab20-ac486cdf9b11\") " pod="kube-system/kube-proxy-jd8js" Jul 2 09:26:24.189623 kubelet[2519]: I0702 09:26:24.186595 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-hostproc\") pod \"cilium-b26jp\" (UID: 
\"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.189623 kubelet[2519]: I0702 09:26:24.186609 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-etc-cni-netd\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.189623 kubelet[2519]: I0702 09:26:24.186625 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c03fc50d-b353-4b69-81b3-1c55a57d9100-clustermesh-secrets\") pod \"cilium-b26jp\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " pod="kube-system/cilium-b26jp" Jul 2 09:26:24.187734 systemd[1]: Created slice kubepods-burstable-podc03fc50d_b353_4b69_81b3_1c55a57d9100.slice - libcontainer container kubepods-burstable-podc03fc50d_b353_4b69_81b3_1c55a57d9100.slice. Jul 2 09:26:24.301847 kubelet[2519]: E0702 09:26:24.301813 2519 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.301989 kubelet[2519]: E0702 09:26:24.301975 2519 projected.go:200] Error preparing data for projected volume kube-api-access-pxtzl for pod kube-system/cilium-b26jp: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.303180 kubelet[2519]: E0702 09:26:24.302092 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl podName:c03fc50d-b353-4b69-81b3-1c55a57d9100 nodeName:}" failed. No retries permitted until 2024-07-02 09:26:24.802072965 +0000 UTC m=+13.858589972 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pxtzl" (UniqueName: "kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl") pod "cilium-b26jp" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100") : configmap "kube-root-ca.crt" not found Jul 2 09:26:24.306062 kubelet[2519]: E0702 09:26:24.305885 2519 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.306062 kubelet[2519]: E0702 09:26:24.305914 2519 projected.go:200] Error preparing data for projected volume kube-api-access-jdjvg for pod kube-system/kube-proxy-jd8js: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.306062 kubelet[2519]: E0702 09:26:24.305969 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-api-access-jdjvg podName:ea1095dd-8db5-4325-ab20-ac486cdf9b11 nodeName:}" failed. No retries permitted until 2024-07-02 09:26:24.80595478 +0000 UTC m=+13.862471787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jdjvg" (UniqueName: "kubernetes.io/projected/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-api-access-jdjvg") pod "kube-proxy-jd8js" (UID: "ea1095dd-8db5-4325-ab20-ac486cdf9b11") : configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894068 kubelet[2519]: E0702 09:26:24.894014 2519 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894296 kubelet[2519]: E0702 09:26:24.894253 2519 projected.go:200] Error preparing data for projected volume kube-api-access-pxtzl for pod kube-system/cilium-b26jp: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894338 kubelet[2519]: E0702 09:26:24.894324 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl podName:c03fc50d-b353-4b69-81b3-1c55a57d9100 nodeName:}" failed. 
No retries permitted until 2024-07-02 09:26:25.894292525 +0000 UTC m=+14.950809532 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pxtzl" (UniqueName: "kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl") pod "cilium-b26jp" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100") : configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894415 kubelet[2519]: E0702 09:26:24.894042 2519 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894415 kubelet[2519]: E0702 09:26:24.894355 2519 projected.go:200] Error preparing data for projected volume kube-api-access-jdjvg for pod kube-system/kube-proxy-jd8js: configmap "kube-root-ca.crt" not found Jul 2 09:26:24.894415 kubelet[2519]: E0702 09:26:24.894407 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-api-access-jdjvg podName:ea1095dd-8db5-4325-ab20-ac486cdf9b11 nodeName:}" failed. No retries permitted until 2024-07-02 09:26:25.894379629 +0000 UTC m=+14.950896636 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jdjvg" (UniqueName: "kubernetes.io/projected/ea1095dd-8db5-4325-ab20-ac486cdf9b11-kube-api-access-jdjvg") pod "kube-proxy-jd8js" (UID: "ea1095dd-8db5-4325-ab20-ac486cdf9b11") : configmap "kube-root-ca.crt" not found Jul 2 09:26:25.133038 update_engine[1424]: I0702 09:26:25.132478 1424 update_attempter.cc:509] Updating boot flags... 
Jul 2 09:26:25.151495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2605) Jul 2 09:26:25.184880 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2606) Jul 2 09:26:25.217914 kubelet[2519]: I0702 09:26:25.217855 2519 topology_manager.go:215] "Topology Admit Handler" podUID="fc233a06-da13-47b1-aecb-275e26d2fba7" podNamespace="kube-system" podName="cilium-operator-599987898-l4rrb" Jul 2 09:26:25.229510 systemd[1]: Created slice kubepods-besteffort-podfc233a06_da13_47b1_aecb_275e26d2fba7.slice - libcontainer container kubepods-besteffort-podfc233a06_da13_47b1_aecb_275e26d2fba7.slice. Jul 2 09:26:25.299002 kubelet[2519]: I0702 09:26:25.298953 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc233a06-da13-47b1-aecb-275e26d2fba7-cilium-config-path\") pod \"cilium-operator-599987898-l4rrb\" (UID: \"fc233a06-da13-47b1-aecb-275e26d2fba7\") " pod="kube-system/cilium-operator-599987898-l4rrb" Jul 2 09:26:25.299174 kubelet[2519]: I0702 09:26:25.299055 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxst\" (UniqueName: \"kubernetes.io/projected/fc233a06-da13-47b1-aecb-275e26d2fba7-kube-api-access-6mxst\") pod \"cilium-operator-599987898-l4rrb\" (UID: \"fc233a06-da13-47b1-aecb-275e26d2fba7\") " pod="kube-system/cilium-operator-599987898-l4rrb" Jul 2 09:26:25.533326 kubelet[2519]: E0702 09:26:25.533293 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:25.533758 containerd[1432]: time="2024-07-02T09:26:25.533726234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-l4rrb,Uid:fc233a06-da13-47b1-aecb-275e26d2fba7,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:25.555330 containerd[1432]: time="2024-07-02T09:26:25.555227390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:25.555330 containerd[1432]: time="2024-07-02T09:26:25.555287286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:25.555484 containerd[1432]: time="2024-07-02T09:26:25.555317175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:25.555484 containerd[1432]: time="2024-07-02T09:26:25.555333539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:25.576560 systemd[1]: Started cri-containerd-5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464.scope - libcontainer container 5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464. 
Jul 2 09:26:25.600738 containerd[1432]: time="2024-07-02T09:26:25.600697017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l4rrb,Uid:fc233a06-da13-47b1-aecb-275e26d2fba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\"" Jul 2 09:26:25.601522 kubelet[2519]: E0702 09:26:25.601436 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:25.602578 containerd[1432]: time="2024-07-02T09:26:25.602548764Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 09:26:25.984990 kubelet[2519]: E0702 09:26:25.984855 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:25.985922 containerd[1432]: time="2024-07-02T09:26:25.985885413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jd8js,Uid:ea1095dd-8db5-4325-ab20-ac486cdf9b11,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:25.991794 kubelet[2519]: E0702 09:26:25.991545 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:25.992279 containerd[1432]: time="2024-07-02T09:26:25.992244551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b26jp,Uid:c03fc50d-b353-4b69-81b3-1c55a57d9100,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:26.005164 containerd[1432]: time="2024-07-02T09:26:26.005023204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:26.005164 containerd[1432]: time="2024-07-02T09:26:26.005102265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:26.005164 containerd[1432]: time="2024-07-02T09:26:26.005124831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:26.005164 containerd[1432]: time="2024-07-02T09:26:26.005142396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:26.010789 containerd[1432]: time="2024-07-02T09:26:26.010680782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:26:26.010789 containerd[1432]: time="2024-07-02T09:26:26.010734357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:26.010789 containerd[1432]: time="2024-07-02T09:26:26.010755442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:26:26.010789 containerd[1432]: time="2024-07-02T09:26:26.010765965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:26:26.024560 systemd[1]: Started cri-containerd-cebe6d1584ecfc0e6352f304ad429daa7bfbc1b6ea437522d8439e5f8a60086f.scope - libcontainer container cebe6d1584ecfc0e6352f304ad429daa7bfbc1b6ea437522d8439e5f8a60086f. Jul 2 09:26:26.028924 systemd[1]: Started cri-containerd-2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8.scope - libcontainer container 2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8. 
Jul 2 09:26:26.051516 containerd[1432]: time="2024-07-02T09:26:26.051473423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jd8js,Uid:ea1095dd-8db5-4325-ab20-ac486cdf9b11,Namespace:kube-system,Attempt:0,} returns sandbox id \"cebe6d1584ecfc0e6352f304ad429daa7bfbc1b6ea437522d8439e5f8a60086f\"" Jul 2 09:26:26.052002 containerd[1432]: time="2024-07-02T09:26:26.051978797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b26jp,Uid:c03fc50d-b353-4b69-81b3-1c55a57d9100,Namespace:kube-system,Attempt:0,} returns sandbox id \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\"" Jul 2 09:26:26.053193 kubelet[2519]: E0702 09:26:26.052930 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:26.053193 kubelet[2519]: E0702 09:26:26.052964 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:26.057556 containerd[1432]: time="2024-07-02T09:26:26.056767945Z" level=info msg="CreateContainer within sandbox \"cebe6d1584ecfc0e6352f304ad429daa7bfbc1b6ea437522d8439e5f8a60086f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:26:26.073280 containerd[1432]: time="2024-07-02T09:26:26.073235065Z" level=info msg="CreateContainer within sandbox \"cebe6d1584ecfc0e6352f304ad429daa7bfbc1b6ea437522d8439e5f8a60086f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c0a4ed73c23cae5576ed283d7b80af52459bb55ccb1fac3db861736808dfa05\"" Jul 2 09:26:26.073817 containerd[1432]: time="2024-07-02T09:26:26.073692666Z" level=info msg="StartContainer for \"7c0a4ed73c23cae5576ed283d7b80af52459bb55ccb1fac3db861736808dfa05\"" Jul 2 09:26:26.096566 systemd[1]: Started 
cri-containerd-7c0a4ed73c23cae5576ed283d7b80af52459bb55ccb1fac3db861736808dfa05.scope - libcontainer container 7c0a4ed73c23cae5576ed283d7b80af52459bb55ccb1fac3db861736808dfa05. Jul 2 09:26:26.121822 containerd[1432]: time="2024-07-02T09:26:26.121771076Z" level=info msg="StartContainer for \"7c0a4ed73c23cae5576ed283d7b80af52459bb55ccb1fac3db861736808dfa05\" returns successfully" Jul 2 09:26:26.417989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293401331.mount: Deactivated successfully. Jul 2 09:26:27.055881 containerd[1432]: time="2024-07-02T09:26:27.055820807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:26:27.057044 containerd[1432]: time="2024-07-02T09:26:27.056996909Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138350" Jul 2 09:26:27.057937 containerd[1432]: time="2024-07-02T09:26:27.057903822Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:26:27.059404 containerd[1432]: time="2024-07-02T09:26:27.059353153Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.456764499s" Jul 2 09:26:27.059451 containerd[1432]: time="2024-07-02T09:26:27.059406967Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 09:26:27.060558 containerd[1432]: time="2024-07-02T09:26:27.060491365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 09:26:27.062413 containerd[1432]: time="2024-07-02T09:26:27.062340280Z" level=info msg="CreateContainer within sandbox \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 09:26:27.066365 kubelet[2519]: E0702 09:26:27.066338 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:27.081052 kubelet[2519]: I0702 09:26:27.080968 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jd8js" podStartSLOduration=3.078934936 podStartE2EDuration="3.078934936s" podCreationTimestamp="2024-07-02 09:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:27.07863818 +0000 UTC m=+16.135155187" watchObservedRunningTime="2024-07-02 09:26:27.078934936 +0000 UTC m=+16.135451943" Jul 2 09:26:27.082784 containerd[1432]: time="2024-07-02T09:26:27.082730109Z" level=info msg="CreateContainer within sandbox \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\"" Jul 2 09:26:27.084020 containerd[1432]: time="2024-07-02T09:26:27.083939620Z" level=info msg="StartContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\"" 
Jul 2 09:26:27.114549 systemd[1]: Started cri-containerd-31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797.scope - libcontainer container 31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797. Jul 2 09:26:27.148987 containerd[1432]: time="2024-07-02T09:26:27.148861112Z" level=info msg="StartContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" returns successfully" Jul 2 09:26:28.070598 kubelet[2519]: E0702 09:26:28.070447 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:28.071662 kubelet[2519]: E0702 09:26:28.071504 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:28.081500 kubelet[2519]: I0702 09:26:28.081437 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l4rrb" podStartSLOduration=1.623263369 podStartE2EDuration="3.081383543s" podCreationTimestamp="2024-07-02 09:26:25 +0000 UTC" firstStartedPulling="2024-07-02 09:26:25.602174021 +0000 UTC m=+14.658690988" lastFinishedPulling="2024-07-02 09:26:27.060294195 +0000 UTC m=+16.116811162" observedRunningTime="2024-07-02 09:26:28.080454992 +0000 UTC m=+17.136972039" watchObservedRunningTime="2024-07-02 09:26:28.081383543 +0000 UTC m=+17.137900630" Jul 2 09:26:29.071919 kubelet[2519]: E0702 09:26:29.071888 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:37.578219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339400564.mount: Deactivated successfully. 
Jul 2 09:26:38.823868 containerd[1432]: time="2024-07-02T09:26:38.823569254Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:26:38.825532 containerd[1432]: time="2024-07-02T09:26:38.825490922Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651506" Jul 2 09:26:38.826473 containerd[1432]: time="2024-07-02T09:26:38.826418810Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:26:38.828809 containerd[1432]: time="2024-07-02T09:26:38.828706703Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.768160205s" Jul 2 09:26:38.828809 containerd[1432]: time="2024-07-02T09:26:38.828738429Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 09:26:38.832039 containerd[1432]: time="2024-07-02T09:26:38.831918364Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 09:26:38.845766 containerd[1432]: time="2024-07-02T09:26:38.845655970Z" level=info msg="CreateContainer within sandbox 
\"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\"" Jul 2 09:26:38.846803 containerd[1432]: time="2024-07-02T09:26:38.846039599Z" level=info msg="StartContainer for \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\"" Jul 2 09:26:38.881537 systemd[1]: Started cri-containerd-1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa.scope - libcontainer container 1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa. Jul 2 09:26:38.901709 containerd[1432]: time="2024-07-02T09:26:38.901674344Z" level=info msg="StartContainer for \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\" returns successfully" Jul 2 09:26:38.939567 systemd[1]: cri-containerd-1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa.scope: Deactivated successfully. Jul 2 09:26:39.083833 containerd[1432]: time="2024-07-02T09:26:39.083705729Z" level=info msg="shim disconnected" id=1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa namespace=k8s.io Jul 2 09:26:39.084070 containerd[1432]: time="2024-07-02T09:26:39.083999741Z" level=warning msg="cleaning up after shim disconnected" id=1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa namespace=k8s.io Jul 2 09:26:39.084070 containerd[1432]: time="2024-07-02T09:26:39.084019144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:39.091897 kubelet[2519]: E0702 09:26:39.090329 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:39.843079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa-rootfs.mount: Deactivated successfully. 
Jul 2 09:26:40.093088 kubelet[2519]: E0702 09:26:40.093031 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:40.098726 containerd[1432]: time="2024-07-02T09:26:40.098601656Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 09:26:40.118373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603128551.mount: Deactivated successfully. Jul 2 09:26:40.118724 containerd[1432]: time="2024-07-02T09:26:40.118673103Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\"" Jul 2 09:26:40.122325 containerd[1432]: time="2024-07-02T09:26:40.122289757Z" level=info msg="StartContainer for \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\"" Jul 2 09:26:40.157558 systemd[1]: Started cri-containerd-d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd.scope - libcontainer container d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd. Jul 2 09:26:40.178244 containerd[1432]: time="2024-07-02T09:26:40.178112355Z" level=info msg="StartContainer for \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\" returns successfully" Jul 2 09:26:40.209413 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:26:40.209644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:26:40.209707 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 09:26:40.216981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 09:26:40.217150 systemd[1]: cri-containerd-d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd.scope: Deactivated successfully. Jul 2 09:26:40.235064 containerd[1432]: time="2024-07-02T09:26:40.234995892Z" level=info msg="shim disconnected" id=d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd namespace=k8s.io Jul 2 09:26:40.235064 containerd[1432]: time="2024-07-02T09:26:40.235059423Z" level=warning msg="cleaning up after shim disconnected" id=d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd namespace=k8s.io Jul 2 09:26:40.235064 containerd[1432]: time="2024-07-02T09:26:40.235068745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:40.240626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:26:40.843341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd-rootfs.mount: Deactivated successfully. Jul 2 09:26:41.096058 kubelet[2519]: E0702 09:26:41.095945 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:41.098456 containerd[1432]: time="2024-07-02T09:26:41.098293662Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 09:26:41.115119 containerd[1432]: time="2024-07-02T09:26:41.115076182Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\"" Jul 2 09:26:41.116844 containerd[1432]: time="2024-07-02T09:26:41.115886475Z" level=info msg="StartContainer for \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\"" Jul 
2 09:26:41.145561 systemd[1]: Started cri-containerd-a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb.scope - libcontainer container a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb. Jul 2 09:26:41.168789 containerd[1432]: time="2024-07-02T09:26:41.168747889Z" level=info msg="StartContainer for \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\" returns successfully" Jul 2 09:26:41.188140 systemd[1]: cri-containerd-a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb.scope: Deactivated successfully. Jul 2 09:26:41.205940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb-rootfs.mount: Deactivated successfully. Jul 2 09:26:41.212090 containerd[1432]: time="2024-07-02T09:26:41.212033409Z" level=info msg="shim disconnected" id=a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb namespace=k8s.io Jul 2 09:26:41.212090 containerd[1432]: time="2024-07-02T09:26:41.212090458Z" level=warning msg="cleaning up after shim disconnected" id=a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb namespace=k8s.io Jul 2 09:26:41.212090 containerd[1432]: time="2024-07-02T09:26:41.212099620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:41.224284 containerd[1432]: time="2024-07-02T09:26:41.224240416Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:26:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 09:26:42.099248 kubelet[2519]: E0702 09:26:42.099218 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:42.102431 containerd[1432]: time="2024-07-02T09:26:42.101915492Z" level=info msg="CreateContainer within sandbox 
\"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 09:26:42.118158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294600506.mount: Deactivated successfully. Jul 2 09:26:42.119346 containerd[1432]: time="2024-07-02T09:26:42.119309304Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\"" Jul 2 09:26:42.121185 containerd[1432]: time="2024-07-02T09:26:42.121143316Z" level=info msg="StartContainer for \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\"" Jul 2 09:26:42.149600 systemd[1]: Started cri-containerd-6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d.scope - libcontainer container 6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d. Jul 2 09:26:42.169592 systemd[1]: cri-containerd-6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d.scope: Deactivated successfully. Jul 2 09:26:42.171747 containerd[1432]: time="2024-07-02T09:26:42.171301588Z" level=info msg="StartContainer for \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\" returns successfully" Jul 2 09:26:42.187168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d-rootfs.mount: Deactivated successfully. 
Jul 2 09:26:42.193237 containerd[1432]: time="2024-07-02T09:26:42.193178193Z" level=info msg="shim disconnected" id=6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d namespace=k8s.io Jul 2 09:26:42.193237 containerd[1432]: time="2024-07-02T09:26:42.193237003Z" level=warning msg="cleaning up after shim disconnected" id=6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d namespace=k8s.io Jul 2 09:26:42.193367 containerd[1432]: time="2024-07-02T09:26:42.193246284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:26:43.102312 kubelet[2519]: E0702 09:26:43.102276 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:43.104672 containerd[1432]: time="2024-07-02T09:26:43.104539967Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 09:26:43.120822 containerd[1432]: time="2024-07-02T09:26:43.120776313Z" level=info msg="CreateContainer within sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\"" Jul 2 09:26:43.122360 containerd[1432]: time="2024-07-02T09:26:43.122333234Z" level=info msg="StartContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\"" Jul 2 09:26:43.151538 systemd[1]: Started cri-containerd-6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191.scope - libcontainer container 6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191. 
Jul 2 09:26:43.190538 containerd[1432]: time="2024-07-02T09:26:43.190488314Z" level=info msg="StartContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" returns successfully" Jul 2 09:26:43.342269 kubelet[2519]: I0702 09:26:43.341922 2519 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 09:26:43.356904 kubelet[2519]: I0702 09:26:43.356707 2519 topology_manager.go:215] "Topology Admit Handler" podUID="2945d988-5928-46ae-8e4b-9fe80a83c811" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pkclj" Jul 2 09:26:43.358435 kubelet[2519]: I0702 09:26:43.358399 2519 topology_manager.go:215] "Topology Admit Handler" podUID="d88968a2-cd9b-4635-a305-0c4bedf1b070" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z8zzz" Jul 2 09:26:43.368349 systemd[1]: Created slice kubepods-burstable-pod2945d988_5928_46ae_8e4b_9fe80a83c811.slice - libcontainer container kubepods-burstable-pod2945d988_5928_46ae_8e4b_9fe80a83c811.slice. Jul 2 09:26:43.374714 systemd[1]: Created slice kubepods-burstable-podd88968a2_cd9b_4635_a305_0c4bedf1b070.slice - libcontainer container kubepods-burstable-podd88968a2_cd9b_4635_a305_0c4bedf1b070.slice. 
Jul 2 09:26:43.434440 kubelet[2519]: I0702 09:26:43.434401 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqzl\" (UniqueName: \"kubernetes.io/projected/d88968a2-cd9b-4635-a305-0c4bedf1b070-kube-api-access-zbqzl\") pod \"coredns-7db6d8ff4d-z8zzz\" (UID: \"d88968a2-cd9b-4635-a305-0c4bedf1b070\") " pod="kube-system/coredns-7db6d8ff4d-z8zzz" Jul 2 09:26:43.434440 kubelet[2519]: I0702 09:26:43.434441 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjd6\" (UniqueName: \"kubernetes.io/projected/2945d988-5928-46ae-8e4b-9fe80a83c811-kube-api-access-lgjd6\") pod \"coredns-7db6d8ff4d-pkclj\" (UID: \"2945d988-5928-46ae-8e4b-9fe80a83c811\") " pod="kube-system/coredns-7db6d8ff4d-pkclj" Jul 2 09:26:43.434597 kubelet[2519]: I0702 09:26:43.434465 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d88968a2-cd9b-4635-a305-0c4bedf1b070-config-volume\") pod \"coredns-7db6d8ff4d-z8zzz\" (UID: \"d88968a2-cd9b-4635-a305-0c4bedf1b070\") " pod="kube-system/coredns-7db6d8ff4d-z8zzz" Jul 2 09:26:43.434597 kubelet[2519]: I0702 09:26:43.434484 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2945d988-5928-46ae-8e4b-9fe80a83c811-config-volume\") pod \"coredns-7db6d8ff4d-pkclj\" (UID: \"2945d988-5928-46ae-8e4b-9fe80a83c811\") " pod="kube-system/coredns-7db6d8ff4d-pkclj" Jul 2 09:26:43.545694 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442). 
Jul 2 09:26:43.581123 sshd[3294]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:26:43.582497 sshd[3294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:26:43.589597 systemd-logind[1418]: New session 8 of user core. Jul 2 09:26:43.595514 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 09:26:43.672992 kubelet[2519]: E0702 09:26:43.672870 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:43.674190 containerd[1432]: time="2024-07-02T09:26:43.673798595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkclj,Uid:2945d988-5928-46ae-8e4b-9fe80a83c811,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:43.677525 kubelet[2519]: E0702 09:26:43.677490 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:26:43.678328 containerd[1432]: time="2024-07-02T09:26:43.678117101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8zzz,Uid:d88968a2-cd9b-4635-a305-0c4bedf1b070,Namespace:kube-system,Attempt:0,}" Jul 2 09:26:43.776876 sshd[3294]: pam_unix(sshd:session): session closed for user core Jul 2 09:26:43.783905 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:52442.service: Deactivated successfully. Jul 2 09:26:43.786293 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 09:26:43.788886 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Jul 2 09:26:43.791206 systemd-logind[1418]: Removed session 8. 
Jul 2 09:26:44.107618 kubelet[2519]: E0702 09:26:44.107575 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:44.120520 kubelet[2519]: I0702 09:26:44.119923 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b26jp" podStartSLOduration=7.345292903 podStartE2EDuration="20.119907762s" podCreationTimestamp="2024-07-02 09:26:24 +0000 UTC" firstStartedPulling="2024-07-02 09:26:26.054667749 +0000 UTC m=+15.111184756" lastFinishedPulling="2024-07-02 09:26:38.829282608 +0000 UTC m=+27.885799615" observedRunningTime="2024-07-02 09:26:44.118843963 +0000 UTC m=+33.175360970" watchObservedRunningTime="2024-07-02 09:26:44.119907762 +0000 UTC m=+33.176424769"
Jul 2 09:26:45.109483 kubelet[2519]: E0702 09:26:45.109432 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:45.451919 systemd-networkd[1374]: cilium_host: Link UP
Jul 2 09:26:45.453008 systemd-networkd[1374]: cilium_net: Link UP
Jul 2 09:26:45.453337 systemd-networkd[1374]: cilium_net: Gained carrier
Jul 2 09:26:45.453618 systemd-networkd[1374]: cilium_host: Gained carrier
Jul 2 09:26:45.538638 systemd-networkd[1374]: cilium_vxlan: Link UP
Jul 2 09:26:45.538643 systemd-networkd[1374]: cilium_vxlan: Gained carrier
Jul 2 09:26:45.667553 systemd-networkd[1374]: cilium_net: Gained IPv6LL
Jul 2 09:26:45.832411 kernel: NET: Registered PF_ALG protocol family
Jul 2 09:26:46.112184 kubelet[2519]: E0702 09:26:46.112061 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:46.435545 systemd-networkd[1374]: cilium_host: Gained IPv6LL
Jul 2 09:26:46.453555 systemd-networkd[1374]: lxc_health: Link UP
Jul 2 09:26:46.460301 systemd-networkd[1374]: lxc_health: Gained carrier
Jul 2 09:26:46.840799 systemd-networkd[1374]: lxca7d5c87f426f: Link UP
Jul 2 09:26:46.848190 kernel: eth0: renamed from tmp72db8
Jul 2 09:26:46.855055 systemd-networkd[1374]: lxca7d5c87f426f: Gained carrier
Jul 2 09:26:46.855828 systemd-networkd[1374]: lxc1e735d8386e4: Link UP
Jul 2 09:26:46.877006 kernel: eth0: renamed from tmpbdcfe
Jul 2 09:26:46.884515 systemd-networkd[1374]: lxc1e735d8386e4: Gained carrier
Jul 2 09:26:47.140803 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL
Jul 2 09:26:47.995098 kubelet[2519]: E0702 09:26:47.995049 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:48.114017 kubelet[2519]: E0702 09:26:48.113977 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:48.227985 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jul 2 09:26:48.291867 systemd-networkd[1374]: lxc1e735d8386e4: Gained IPv6LL
Jul 2 09:26:48.675916 systemd-networkd[1374]: lxca7d5c87f426f: Gained IPv6LL
Jul 2 09:26:48.789945 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:52454.service - OpenSSH per-connection server daemon (10.0.0.1:52454).
Jul 2 09:26:48.827865 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 52454 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:26:48.829337 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:26:48.834047 systemd-logind[1418]: New session 9 of user core.
Jul 2 09:26:48.838554 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 09:26:48.963713 sshd[3758]: pam_unix(sshd:session): session closed for user core
Jul 2 09:26:48.969093 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Jul 2 09:26:48.971272 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:52454.service: Deactivated successfully.
Jul 2 09:26:48.973170 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 09:26:48.976137 systemd-logind[1418]: Removed session 9.
Jul 2 09:26:50.411442 containerd[1432]: time="2024-07-02T09:26:50.411293365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:26:50.411442 containerd[1432]: time="2024-07-02T09:26:50.411356413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:26:50.411442 containerd[1432]: time="2024-07-02T09:26:50.411379976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:26:50.411442 containerd[1432]: time="2024-07-02T09:26:50.411419541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:26:50.413418 containerd[1432]: time="2024-07-02T09:26:50.413042822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:26:50.413418 containerd[1432]: time="2024-07-02T09:26:50.413107350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:26:50.413418 containerd[1432]: time="2024-07-02T09:26:50.413122351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:26:50.413418 containerd[1432]: time="2024-07-02T09:26:50.413143634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:26:50.432565 systemd[1]: Started cri-containerd-bdcfeec33e69a6dd035f5fac41641218c0aa2ee0e35827e375ec642674c27c64.scope - libcontainer container bdcfeec33e69a6dd035f5fac41641218c0aa2ee0e35827e375ec642674c27c64.
Jul 2 09:26:50.439647 systemd[1]: Started cri-containerd-72db8f92b3cbf1ec7775e56dab8c4e76357660d31c06df2e53d44ddd69c5f341.scope - libcontainer container 72db8f92b3cbf1ec7775e56dab8c4e76357660d31c06df2e53d44ddd69c5f341.
Jul 2 09:26:50.446833 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:26:50.453632 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:26:50.464158 containerd[1432]: time="2024-07-02T09:26:50.464068168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8zzz,Uid:d88968a2-cd9b-4635-a305-0c4bedf1b070,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdcfeec33e69a6dd035f5fac41641218c0aa2ee0e35827e375ec642674c27c64\""
Jul 2 09:26:50.466102 kubelet[2519]: E0702 09:26:50.466079 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:50.469613 containerd[1432]: time="2024-07-02T09:26:50.469351101Z" level=info msg="CreateContainer within sandbox \"bdcfeec33e69a6dd035f5fac41641218c0aa2ee0e35827e375ec642674c27c64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:26:50.482182 containerd[1432]: time="2024-07-02T09:26:50.482123000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkclj,Uid:2945d988-5928-46ae-8e4b-9fe80a83c811,Namespace:kube-system,Attempt:0,} returns sandbox id \"72db8f92b3cbf1ec7775e56dab8c4e76357660d31c06df2e53d44ddd69c5f341\""
Jul 2 09:26:50.482945 kubelet[2519]: E0702 09:26:50.482916 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:50.512110 containerd[1432]: time="2024-07-02T09:26:50.512052499Z" level=info msg="CreateContainer within sandbox \"72db8f92b3cbf1ec7775e56dab8c4e76357660d31c06df2e53d44ddd69c5f341\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 09:26:50.516122 containerd[1432]: time="2024-07-02T09:26:50.516070436Z" level=info msg="CreateContainer within sandbox \"bdcfeec33e69a6dd035f5fac41641218c0aa2ee0e35827e375ec642674c27c64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47d5e9c87c124fbfea2d91285d0a99b7edd7789d18b6198953c955c770b0d703\""
Jul 2 09:26:50.516622 containerd[1432]: time="2024-07-02T09:26:50.516597621Z" level=info msg="StartContainer for \"47d5e9c87c124fbfea2d91285d0a99b7edd7789d18b6198953c955c770b0d703\""
Jul 2 09:26:50.527745 containerd[1432]: time="2024-07-02T09:26:50.527695073Z" level=info msg="CreateContainer within sandbox \"72db8f92b3cbf1ec7775e56dab8c4e76357660d31c06df2e53d44ddd69c5f341\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8f86dbcde1c8dae3b2c2b2c7ec7e739520639167bb11336659d6f9f33955ce5\""
Jul 2 09:26:50.528552 containerd[1432]: time="2024-07-02T09:26:50.528524895Z" level=info msg="StartContainer for \"c8f86dbcde1c8dae3b2c2b2c7ec7e739520639167bb11336659d6f9f33955ce5\""
Jul 2 09:26:50.541549 systemd[1]: Started cri-containerd-47d5e9c87c124fbfea2d91285d0a99b7edd7789d18b6198953c955c770b0d703.scope - libcontainer container 47d5e9c87c124fbfea2d91285d0a99b7edd7789d18b6198953c955c770b0d703.
Jul 2 09:26:50.558546 systemd[1]: Started cri-containerd-c8f86dbcde1c8dae3b2c2b2c7ec7e739520639167bb11336659d6f9f33955ce5.scope - libcontainer container c8f86dbcde1c8dae3b2c2b2c7ec7e739520639167bb11336659d6f9f33955ce5.
Jul 2 09:26:50.586682 containerd[1432]: time="2024-07-02T09:26:50.586578790Z" level=info msg="StartContainer for \"47d5e9c87c124fbfea2d91285d0a99b7edd7789d18b6198953c955c770b0d703\" returns successfully"
Jul 2 09:26:50.602695 containerd[1432]: time="2024-07-02T09:26:50.602656098Z" level=info msg="StartContainer for \"c8f86dbcde1c8dae3b2c2b2c7ec7e739520639167bb11336659d6f9f33955ce5\" returns successfully"
Jul 2 09:26:51.124226 kubelet[2519]: E0702 09:26:51.124183 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:51.126336 kubelet[2519]: E0702 09:26:51.126303 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:51.132461 kubelet[2519]: I0702 09:26:51.132400 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z8zzz" podStartSLOduration=26.132377105 podStartE2EDuration="26.132377105s" podCreationTimestamp="2024-07-02 09:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:51.132142196 +0000 UTC m=+40.188659203" watchObservedRunningTime="2024-07-02 09:26:51.132377105 +0000 UTC m=+40.188894112"
Jul 2 09:26:51.142559 kubelet[2519]: I0702 09:26:51.142363 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pkclj" podStartSLOduration=26.142349659 podStartE2EDuration="26.142349659s" podCreationTimestamp="2024-07-02 09:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:26:51.141919047 +0000 UTC m=+40.198436054" watchObservedRunningTime="2024-07-02 09:26:51.142349659 +0000 UTC m=+40.198866666"
Jul 2 09:26:52.126898 kubelet[2519]: E0702 09:26:52.126869 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:52.127246 kubelet[2519]: E0702 09:26:52.126923 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:53.128357 kubelet[2519]: E0702 09:26:53.128210 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:53.128357 kubelet[2519]: E0702 09:26:53.128287 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:26:53.978889 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:43970.service - OpenSSH per-connection server daemon (10.0.0.1:43970).
Jul 2 09:26:54.017613 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 43970 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:26:54.018975 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:26:54.022146 systemd-logind[1418]: New session 10 of user core.
Jul 2 09:26:54.033594 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 09:26:54.167164 sshd[3949]: pam_unix(sshd:session): session closed for user core
Jul 2 09:26:54.178950 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:43970.service: Deactivated successfully.
Jul 2 09:26:54.181623 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 09:26:54.183102 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit.
Jul 2 09:26:54.190631 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972).
Jul 2 09:26:54.191743 systemd-logind[1418]: Removed session 10.
Jul 2 09:26:54.220143 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:26:54.221010 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:26:54.224412 systemd-logind[1418]: New session 11 of user core.
Jul 2 09:26:54.232542 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 09:26:54.404792 sshd[3964]: pam_unix(sshd:session): session closed for user core
Jul 2 09:26:54.415138 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:43972.service: Deactivated successfully.
Jul 2 09:26:54.417007 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 09:26:54.418880 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit.
Jul 2 09:26:54.430713 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988).
Jul 2 09:26:54.431500 systemd-logind[1418]: Removed session 11.
Jul 2 09:26:54.459904 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:26:54.461146 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:26:54.464664 systemd-logind[1418]: New session 12 of user core.
Jul 2 09:26:54.475527 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 09:26:54.581824 sshd[3977]: pam_unix(sshd:session): session closed for user core
Jul 2 09:26:54.584802 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:43988.service: Deactivated successfully.
Jul 2 09:26:54.586588 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 09:26:54.587235 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit.
Jul 2 09:26:54.588080 systemd-logind[1418]: Removed session 12.
Jul 2 09:26:59.592861 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:43994.service - OpenSSH per-connection server daemon (10.0.0.1:43994).
Jul 2 09:26:59.624784 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 43994 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:26:59.625970 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:26:59.629456 systemd-logind[1418]: New session 13 of user core.
Jul 2 09:26:59.634546 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 09:26:59.744445 sshd[3994]: pam_unix(sshd:session): session closed for user core
Jul 2 09:26:59.748164 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:43994.service: Deactivated successfully.
Jul 2 09:26:59.750889 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 09:26:59.751752 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit.
Jul 2 09:26:59.752598 systemd-logind[1418]: Removed session 13.
Jul 2 09:27:04.754919 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:52970.service - OpenSSH per-connection server daemon (10.0.0.1:52970).
Jul 2 09:27:04.786800 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 52970 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:04.787910 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:04.791460 systemd-logind[1418]: New session 14 of user core.
Jul 2 09:27:04.803599 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 09:27:04.910869 sshd[4008]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:04.920540 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:52970.service: Deactivated successfully.
Jul 2 09:27:04.921874 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 09:27:04.924661 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit.
Jul 2 09:27:04.925597 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:52980.service - OpenSSH per-connection server daemon (10.0.0.1:52980).
Jul 2 09:27:04.926188 systemd-logind[1418]: Removed session 14.
Jul 2 09:27:04.957731 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 52980 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:04.958854 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:04.962037 systemd-logind[1418]: New session 15 of user core.
Jul 2 09:27:04.969507 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 09:27:05.160229 sshd[4022]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:05.166859 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:52980.service: Deactivated successfully.
Jul 2 09:27:05.168265 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 09:27:05.169512 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit.
Jul 2 09:27:05.170611 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:52992.service - OpenSSH per-connection server daemon (10.0.0.1:52992).
Jul 2 09:27:05.171195 systemd-logind[1418]: Removed session 15.
Jul 2 09:27:05.206702 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 52992 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:05.207765 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:05.211035 systemd-logind[1418]: New session 16 of user core.
Jul 2 09:27:05.217509 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 09:27:06.382287 sshd[4035]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:06.389679 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:52992.service: Deactivated successfully.
Jul 2 09:27:06.391982 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 09:27:06.394786 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit.
Jul 2 09:27:06.401686 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:53004.service - OpenSSH per-connection server daemon (10.0.0.1:53004).
Jul 2 09:27:06.403026 systemd-logind[1418]: Removed session 16.
Jul 2 09:27:06.432348 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 53004 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:06.432943 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:06.441273 systemd-logind[1418]: New session 17 of user core.
Jul 2 09:27:06.450557 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 09:27:06.684051 sshd[4058]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:06.694682 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:53004.service: Deactivated successfully.
Jul 2 09:27:06.696240 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 09:27:06.699543 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit.
Jul 2 09:27:06.706774 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:53008.service - OpenSSH per-connection server daemon (10.0.0.1:53008).
Jul 2 09:27:06.707653 systemd-logind[1418]: Removed session 17.
Jul 2 09:27:06.740051 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 53008 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:06.741615 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:06.745353 systemd-logind[1418]: New session 18 of user core.
Jul 2 09:27:06.754608 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 09:27:06.865412 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:06.869671 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:53008.service: Deactivated successfully.
Jul 2 09:27:06.871467 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 09:27:06.872138 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Jul 2 09:27:06.873165 systemd-logind[1418]: Removed session 18.
Jul 2 09:27:11.876013 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:47450.service - OpenSSH per-connection server daemon (10.0.0.1:47450).
Jul 2 09:27:11.913096 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 47450 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:11.914485 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:11.918436 systemd-logind[1418]: New session 19 of user core.
Jul 2 09:27:11.930546 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 09:27:12.036361 sshd[4088]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:12.039448 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:47450.service: Deactivated successfully.
Jul 2 09:27:12.043011 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 09:27:12.043656 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Jul 2 09:27:12.044717 systemd-logind[1418]: Removed session 19.
Jul 2 09:27:17.050814 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:47460.service - OpenSSH per-connection server daemon (10.0.0.1:47460).
Jul 2 09:27:17.082282 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 47460 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:17.083380 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:17.087366 systemd-logind[1418]: New session 20 of user core.
Jul 2 09:27:17.097525 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 09:27:17.199650 sshd[4105]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:17.202916 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:47460.service: Deactivated successfully.
Jul 2 09:27:17.206533 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 09:27:17.207140 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit.
Jul 2 09:27:17.208060 systemd-logind[1418]: Removed session 20.
Jul 2 09:27:22.211112 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:34540.service - OpenSSH per-connection server daemon (10.0.0.1:34540).
Jul 2 09:27:22.242684 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 34540 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:22.243817 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:22.247862 systemd-logind[1418]: New session 21 of user core.
Jul 2 09:27:22.256608 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 09:27:22.360219 sshd[4120]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:22.363044 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:34540.service: Deactivated successfully.
Jul 2 09:27:22.364652 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 09:27:22.366155 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit.
Jul 2 09:27:22.366950 systemd-logind[1418]: Removed session 21.
Jul 2 09:27:27.370925 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550).
Jul 2 09:27:27.402409 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:27.403485 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:27.408100 systemd-logind[1418]: New session 22 of user core.
Jul 2 09:27:27.415539 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 09:27:27.519074 sshd[4140]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:27.531895 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:34550.service: Deactivated successfully.
Jul 2 09:27:27.534559 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 09:27:27.535929 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit.
Jul 2 09:27:27.537336 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:34552.service - OpenSSH per-connection server daemon (10.0.0.1:34552).
Jul 2 09:27:27.538245 systemd-logind[1418]: Removed session 22.
Jul 2 09:27:27.568918 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 34552 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:27.570072 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:27.573439 systemd-logind[1418]: New session 23 of user core.
Jul 2 09:27:27.579647 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 09:27:29.059767 containerd[1432]: time="2024-07-02T09:27:29.059715093Z" level=info msg="StopContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" with timeout 30 (s)"
Jul 2 09:27:29.068152 containerd[1432]: time="2024-07-02T09:27:29.067708540Z" level=info msg="Stop container \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" with signal terminated"
Jul 2 09:27:29.076615 systemd[1]: cri-containerd-31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797.scope: Deactivated successfully.
Jul 2 09:27:29.092109 containerd[1432]: time="2024-07-02T09:27:29.092053372Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 09:27:29.095476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797-rootfs.mount: Deactivated successfully.
Jul 2 09:27:29.098717 containerd[1432]: time="2024-07-02T09:27:29.098563206Z" level=info msg="StopContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" with timeout 2 (s)"
Jul 2 09:27:29.099037 containerd[1432]: time="2024-07-02T09:27:29.098851496Z" level=info msg="Stop container \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" with signal terminated"
Jul 2 09:27:29.103852 containerd[1432]: time="2024-07-02T09:27:29.103748591Z" level=info msg="shim disconnected" id=31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797 namespace=k8s.io
Jul 2 09:27:29.103852 containerd[1432]: time="2024-07-02T09:27:29.103807034Z" level=warning msg="cleaning up after shim disconnected" id=31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797 namespace=k8s.io
Jul 2 09:27:29.103852 containerd[1432]: time="2024-07-02T09:27:29.103816314Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:29.106030 systemd-networkd[1374]: lxc_health: Link DOWN
Jul 2 09:27:29.106036 systemd-networkd[1374]: lxc_health: Lost carrier
Jul 2 09:27:29.120109 containerd[1432]: time="2024-07-02T09:27:29.120005374Z" level=info msg="StopContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" returns successfully"
Jul 2 09:27:29.124724 containerd[1432]: time="2024-07-02T09:27:29.124672541Z" level=info msg="StopPodSandbox for \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\""
Jul 2 09:27:29.124828 containerd[1432]: time="2024-07-02T09:27:29.124735503Z" level=info msg="Container to stop \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.127180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464-shm.mount: Deactivated successfully.
Jul 2 09:27:29.128649 systemd[1]: cri-containerd-6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191.scope: Deactivated successfully.
Jul 2 09:27:29.128901 systemd[1]: cri-containerd-6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191.scope: Consumed 6.424s CPU time.
Jul 2 09:27:29.134216 systemd[1]: cri-containerd-5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464.scope: Deactivated successfully.
Jul 2 09:27:29.149497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191-rootfs.mount: Deactivated successfully.
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156514442Z" level=info msg="shim disconnected" id=5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464 namespace=k8s.io
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156571804Z" level=warning msg="cleaning up after shim disconnected" id=5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464 namespace=k8s.io
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156580885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156581645Z" level=info msg="shim disconnected" id=6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191 namespace=k8s.io
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156638527Z" level=warning msg="cleaning up after shim disconnected" id=6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191 namespace=k8s.io
Jul 2 09:27:29.157426 containerd[1432]: time="2024-07-02T09:27:29.156671328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:29.156816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464-rootfs.mount: Deactivated successfully.
Jul 2 09:27:29.166855 containerd[1432]: time="2024-07-02T09:27:29.166781130Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:27:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 09:27:29.167975 containerd[1432]: time="2024-07-02T09:27:29.167946932Z" level=info msg="TearDown network for sandbox \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\" successfully"
Jul 2 09:27:29.168029 containerd[1432]: time="2024-07-02T09:27:29.167975533Z" level=info msg="StopPodSandbox for \"5ba300f8e62b4e7847b6cf210e3d07b9efbf6e19adc2f616a03deb0fddd16464\" returns successfully"
Jul 2 09:27:29.177795 containerd[1432]: time="2024-07-02T09:27:29.177744763Z" level=info msg="StopContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" returns successfully"
Jul 2 09:27:29.178149 containerd[1432]: time="2024-07-02T09:27:29.178087335Z" level=info msg="StopPodSandbox for \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\""
Jul 2 09:27:29.178192 containerd[1432]: time="2024-07-02T09:27:29.178129217Z" level=info msg="Container to stop \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.178192 containerd[1432]: time="2024-07-02T09:27:29.178163538Z" level=info msg="Container to stop \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.178192 containerd[1432]: time="2024-07-02T09:27:29.178172698Z" level=info msg="Container to stop \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.178192 containerd[1432]: time="2024-07-02T09:27:29.178181619Z" level=info msg="Container to stop \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.178192 containerd[1432]: time="2024-07-02T09:27:29.178190339Z" level=info msg="Container to stop \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 09:27:29.183964 systemd[1]: cri-containerd-2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8.scope: Deactivated successfully.
Jul 2 09:27:29.193163 kubelet[2519]: I0702 09:27:29.193129 2519 scope.go:117] "RemoveContainer" containerID="31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797"
Jul 2 09:27:29.195010 containerd[1432]: time="2024-07-02T09:27:29.194975741Z" level=info msg="RemoveContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\""
Jul 2 09:27:29.198765 containerd[1432]: time="2024-07-02T09:27:29.198725355Z" level=info msg="RemoveContainer for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" returns successfully"
Jul 2 09:27:29.198998 kubelet[2519]: I0702 09:27:29.198932 2519 scope.go:117] "RemoveContainer" containerID="31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797"
Jul 2 09:27:29.202790 containerd[1432]: time="2024-07-02T09:27:29.199188372Z" level=error msg="ContainerStatus for \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\": not found"
Jul 2 09:27:29.202871 kubelet[2519]: I0702 09:27:29.200631 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mxst\" (UniqueName: \"kubernetes.io/projected/fc233a06-da13-47b1-aecb-275e26d2fba7-kube-api-access-6mxst\") pod \"fc233a06-da13-47b1-aecb-275e26d2fba7\" (UID: \"fc233a06-da13-47b1-aecb-275e26d2fba7\") "
Jul 2 09:27:29.202871 kubelet[2519]: I0702 09:27:29.200675 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc233a06-da13-47b1-aecb-275e26d2fba7-cilium-config-path\") pod \"fc233a06-da13-47b1-aecb-275e26d2fba7\" (UID: \"fc233a06-da13-47b1-aecb-275e26d2fba7\") "
Jul 2 09:27:29.202871 kubelet[2519]: I0702 09:27:29.202479 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc233a06-da13-47b1-aecb-275e26d2fba7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc233a06-da13-47b1-aecb-275e26d2fba7" (UID: "fc233a06-da13-47b1-aecb-275e26d2fba7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 09:27:29.203115 kubelet[2519]: E0702 09:27:29.203089 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\": not found" containerID="31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797"
Jul 2 09:27:29.203201 kubelet[2519]: I0702 09:27:29.203123 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797"} err="failed to get container status \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\": rpc error: code = NotFound desc = an error occurred when try to find container \"31e9797d2f3f9335591dc822b1bf8b52ff66326fdbf4b693f9a533ffee62e797\": not found"
Jul 2 09:27:29.206412 kubelet[2519]: I0702 09:27:29.206355 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc233a06-da13-47b1-aecb-275e26d2fba7-kube-api-access-6mxst" (OuterVolumeSpecName: "kube-api-access-6mxst") pod
"fc233a06-da13-47b1-aecb-275e26d2fba7" (UID: "fc233a06-da13-47b1-aecb-275e26d2fba7"). InnerVolumeSpecName "kube-api-access-6mxst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:27:29.218146 containerd[1432]: time="2024-07-02T09:27:29.218028167Z" level=info msg="shim disconnected" id=2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8 namespace=k8s.io Jul 2 09:27:29.218146 containerd[1432]: time="2024-07-02T09:27:29.218143731Z" level=warning msg="cleaning up after shim disconnected" id=2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8 namespace=k8s.io Jul 2 09:27:29.218289 containerd[1432]: time="2024-07-02T09:27:29.218152891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:27:29.229907 containerd[1432]: time="2024-07-02T09:27:29.229863191Z" level=info msg="TearDown network for sandbox \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" successfully" Jul 2 09:27:29.229907 containerd[1432]: time="2024-07-02T09:27:29.229899672Z" level=info msg="StopPodSandbox for \"2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8\" returns successfully" Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301824 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-cgroup\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301862 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtzl\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301899 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-run\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301918 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-config-path\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301932 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cni-path\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302475 kubelet[2519]: I0702 09:27:29.301946 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-net\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 09:27:29.301961 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-etc-cni-netd\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 09:27:29.301974 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-bpf-maps\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 
09:27:29.301988 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-lib-modules\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 09:27:29.302004 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-hubble-tls\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 09:27:29.302026 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c03fc50d-b353-4b69-81b3-1c55a57d9100-clustermesh-secrets\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302764 kubelet[2519]: I0702 09:27:29.302044 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-xtables-lock\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302058 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-hostproc\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" (UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302072 2519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-kernel\") pod \"c03fc50d-b353-4b69-81b3-1c55a57d9100\" 
(UID: \"c03fc50d-b353-4b69-81b3-1c55a57d9100\") " Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302101 2519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6mxst\" (UniqueName: \"kubernetes.io/projected/fc233a06-da13-47b1-aecb-275e26d2fba7-kube-api-access-6mxst\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302111 2519 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc233a06-da13-47b1-aecb-275e26d2fba7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302155 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.302887 kubelet[2519]: I0702 09:27:29.302183 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303005 kubelet[2519]: I0702 09:27:29.302448 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303005 kubelet[2519]: I0702 09:27:29.302482 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303317 kubelet[2519]: I0702 09:27:29.303282 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303432 kubelet[2519]: I0702 09:27:29.303416 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-hostproc" (OuterVolumeSpecName: "hostproc") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303494 kubelet[2519]: I0702 09:27:29.303438 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303561 kubelet[2519]: I0702 09:27:29.303548 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303640 kubelet[2519]: I0702 09:27:29.303604 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cni-path" (OuterVolumeSpecName: "cni-path") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.303715 kubelet[2519]: I0702 09:27:29.303617 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 09:27:29.304612 kubelet[2519]: I0702 09:27:29.304575 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 09:27:29.305226 kubelet[2519]: I0702 09:27:29.304790 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03fc50d-b353-4b69-81b3-1c55a57d9100-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 09:27:29.305439 kubelet[2519]: I0702 09:27:29.305383 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl" (OuterVolumeSpecName: "kube-api-access-pxtzl") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "kube-api-access-pxtzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:27:29.305779 kubelet[2519]: I0702 09:27:29.305744 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c03fc50d-b353-4b69-81b3-1c55a57d9100" (UID: "c03fc50d-b353-4b69-81b3-1c55a57d9100"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403008 2519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pxtzl\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-kube-api-access-pxtzl\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403039 2519 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403049 2519 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403059 2519 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403068 2519 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403075 2519 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403083 2519 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403142 kubelet[2519]: I0702 09:27:29.403092 
2519 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403100 2519 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c03fc50d-b353-4b69-81b3-1c55a57d9100-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403107 2519 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c03fc50d-b353-4b69-81b3-1c55a57d9100-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403115 2519 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403122 2519 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403129 2519 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.403411 kubelet[2519]: I0702 09:27:29.403136 2519 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c03fc50d-b353-4b69-81b3-1c55a57d9100-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 09:27:29.498082 systemd[1]: Removed slice kubepods-besteffort-podfc233a06_da13_47b1_aecb_275e26d2fba7.slice - libcontainer container 
kubepods-besteffort-podfc233a06_da13_47b1_aecb_275e26d2fba7.slice. Jul 2 09:27:30.079935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8-rootfs.mount: Deactivated successfully. Jul 2 09:27:30.080038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2279747993e05f778f46789ccf4003d0651b6f9905f7db639862f87fa8a046d8-shm.mount: Deactivated successfully. Jul 2 09:27:30.080101 systemd[1]: var-lib-kubelet-pods-c03fc50d\x2db353\x2d4b69\x2d81b3\x2d1c55a57d9100-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxtzl.mount: Deactivated successfully. Jul 2 09:27:30.080153 systemd[1]: var-lib-kubelet-pods-fc233a06\x2dda13\x2d47b1\x2daecb\x2d275e26d2fba7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mxst.mount: Deactivated successfully. Jul 2 09:27:30.080201 systemd[1]: var-lib-kubelet-pods-c03fc50d\x2db353\x2d4b69\x2d81b3\x2d1c55a57d9100-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 09:27:30.080249 systemd[1]: var-lib-kubelet-pods-c03fc50d\x2db353\x2d4b69\x2d81b3\x2d1c55a57d9100-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 09:27:30.197271 kubelet[2519]: I0702 09:27:30.197235 2519 scope.go:117] "RemoveContainer" containerID="6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191" Jul 2 09:27:30.199755 containerd[1432]: time="2024-07-02T09:27:30.199714083Z" level=info msg="RemoveContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\"" Jul 2 09:27:30.202073 systemd[1]: Removed slice kubepods-burstable-podc03fc50d_b353_4b69_81b3_1c55a57d9100.slice - libcontainer container kubepods-burstable-podc03fc50d_b353_4b69_81b3_1c55a57d9100.slice. Jul 2 09:27:30.202177 systemd[1]: kubepods-burstable-podc03fc50d_b353_4b69_81b3_1c55a57d9100.slice: Consumed 6.561s CPU time. 
Jul 2 09:27:30.203297 containerd[1432]: time="2024-07-02T09:27:30.203262046Z" level=info msg="RemoveContainer for \"6e2a5f1536813d1834df9f82b0e5598efa9bd771c4144b131b7120e6e4ae9191\" returns successfully" Jul 2 09:27:30.203456 kubelet[2519]: I0702 09:27:30.203433 2519 scope.go:117] "RemoveContainer" containerID="6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d" Jul 2 09:27:30.204463 containerd[1432]: time="2024-07-02T09:27:30.204200879Z" level=info msg="RemoveContainer for \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\"" Jul 2 09:27:30.206952 containerd[1432]: time="2024-07-02T09:27:30.206876212Z" level=info msg="RemoveContainer for \"6c121a26e510c48ff91003bec217c54f6f1494a0b9067789d87077c29365c23d\" returns successfully" Jul 2 09:27:30.207058 kubelet[2519]: I0702 09:27:30.207031 2519 scope.go:117] "RemoveContainer" containerID="a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb" Jul 2 09:27:30.207999 containerd[1432]: time="2024-07-02T09:27:30.207968930Z" level=info msg="RemoveContainer for \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\"" Jul 2 09:27:30.211743 containerd[1432]: time="2024-07-02T09:27:30.211262604Z" level=info msg="RemoveContainer for \"a595d2fdb476bc3ca57278b442cc38ee0007a1f177af381fda7f088817547afb\" returns successfully" Jul 2 09:27:30.211814 kubelet[2519]: I0702 09:27:30.211509 2519 scope.go:117] "RemoveContainer" containerID="d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd" Jul 2 09:27:30.213476 containerd[1432]: time="2024-07-02T09:27:30.213203791Z" level=info msg="RemoveContainer for \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\"" Jul 2 09:27:30.215616 containerd[1432]: time="2024-07-02T09:27:30.215561633Z" level=info msg="RemoveContainer for \"d67e8be5cacaffcf1b4dd65b20f18054992060a8e9519dc43ed2302dc6a511bd\" returns successfully" Jul 2 09:27:30.215875 kubelet[2519]: I0702 09:27:30.215848 2519 scope.go:117] "RemoveContainer" 
containerID="1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa" Jul 2 09:27:30.216718 containerd[1432]: time="2024-07-02T09:27:30.216682712Z" level=info msg="RemoveContainer for \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\"" Jul 2 09:27:30.218980 containerd[1432]: time="2024-07-02T09:27:30.218946431Z" level=info msg="RemoveContainer for \"1fd2cfcb276a8d3446286c822bbbce0c621586e470304f29d85fdf19c5a4e5aa\" returns successfully" Jul 2 09:27:31.028511 sshd[4154]: pam_unix(sshd:session): session closed for user core Jul 2 09:27:31.035495 kubelet[2519]: I0702 09:27:31.034981 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" path="/var/lib/kubelet/pods/c03fc50d-b353-4b69-81b3-1c55a57d9100/volumes" Jul 2 09:27:31.035765 kubelet[2519]: I0702 09:27:31.035745 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc233a06-da13-47b1-aecb-275e26d2fba7" path="/var/lib/kubelet/pods/fc233a06-da13-47b1-aecb-275e26d2fba7/volumes" Jul 2 09:27:31.036229 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:34552.service: Deactivated successfully. Jul 2 09:27:31.038895 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:27:31.040234 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:27:31.047725 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288). Jul 2 09:27:31.048666 systemd-logind[1418]: Removed session 23. 
Jul 2 09:27:31.076924 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:27:31.077621 kubelet[2519]: E0702 09:27:31.077590 2519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 09:27:31.078122 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:27:31.081999 systemd-logind[1418]: New session 24 of user core. Jul 2 09:27:31.087514 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 09:27:32.636321 sshd[4317]: pam_unix(sshd:session): session closed for user core Jul 2 09:27:32.647408 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:49288.service: Deactivated successfully. Jul 2 09:27:32.649890 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:27:32.651467 systemd[1]: session-24.scope: Consumed 1.476s CPU time. Jul 2 09:27:32.654354 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit. 
Jul 2 09:27:32.654608 kubelet[2519]: I0702 09:27:32.654383 2519 topology_manager.go:215] "Topology Admit Handler" podUID="568dfbde-c46f-40f9-967a-141680e72734" podNamespace="kube-system" podName="cilium-dm6qc" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654559 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="mount-bpf-fs" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654571 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="clean-cilium-state" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654577 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="cilium-agent" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654583 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc233a06-da13-47b1-aecb-275e26d2fba7" containerName="cilium-operator" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654589 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="mount-cgroup" Jul 2 09:27:32.654608 kubelet[2519]: E0702 09:27:32.654595 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="apply-sysctl-overwrites" Jul 2 09:27:32.654608 kubelet[2519]: I0702 09:27:32.654616 2519 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc233a06-da13-47b1-aecb-275e26d2fba7" containerName="cilium-operator" Jul 2 09:27:32.654948 kubelet[2519]: I0702 09:27:32.654622 2519 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03fc50d-b353-4b69-81b3-1c55a57d9100" containerName="cilium-agent" Jul 2 09:27:32.664700 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294). 
Jul 2 09:27:32.666049 systemd-logind[1418]: Removed session 24. Jul 2 09:27:32.674722 systemd[1]: Created slice kubepods-burstable-pod568dfbde_c46f_40f9_967a_141680e72734.slice - libcontainer container kubepods-burstable-pod568dfbde_c46f_40f9_967a_141680e72734.slice. Jul 2 09:27:32.700505 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:27:32.701702 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:27:32.705189 systemd-logind[1418]: New session 25 of user core. Jul 2 09:27:32.712589 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722454 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-hostproc\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc" Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722490 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-cilium-cgroup\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc" Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722510 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-host-proc-sys-net\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc" Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722525 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-cilium-run\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722541 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-etc-cni-netd\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722789 kubelet[2519]: I0702 09:27:32.722556 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-xtables-lock\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722570 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/568dfbde-c46f-40f9-967a-141680e72734-cilium-config-path\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722583 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/568dfbde-c46f-40f9-967a-141680e72734-cilium-ipsec-secrets\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722598 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-cni-path\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722612 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-bpf-maps\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722625 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-lib-modules\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.722985 kubelet[2519]: I0702 09:27:32.722644 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/568dfbde-c46f-40f9-967a-141680e72734-hubble-tls\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.723107 kubelet[2519]: I0702 09:27:32.722659 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/568dfbde-c46f-40f9-967a-141680e72734-clustermesh-secrets\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.723107 kubelet[2519]: I0702 09:27:32.722673 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/568dfbde-c46f-40f9-967a-141680e72734-host-proc-sys-kernel\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.723107 kubelet[2519]: I0702 09:27:32.722688 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpmmt\" (UniqueName: \"kubernetes.io/projected/568dfbde-c46f-40f9-967a-141680e72734-kube-api-access-dpmmt\") pod \"cilium-dm6qc\" (UID: \"568dfbde-c46f-40f9-967a-141680e72734\") " pod="kube-system/cilium-dm6qc"
Jul 2 09:27:32.765801 sshd[4331]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:32.777154 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:49294.service: Deactivated successfully.
Jul 2 09:27:32.779707 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 09:27:32.782554 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit.
Jul 2 09:27:32.790680 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:49310.service - OpenSSH per-connection server daemon (10.0.0.1:49310).
Jul 2 09:27:32.791752 systemd-logind[1418]: Removed session 25.
Jul 2 09:27:32.818941 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 49310 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:27:32.820067 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:27:32.824637 systemd-logind[1418]: New session 26 of user core.
Jul 2 09:27:32.832569 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 09:27:32.978903 kubelet[2519]: E0702 09:27:32.978764 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:32.980649 containerd[1432]: time="2024-07-02T09:27:32.980323397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dm6qc,Uid:568dfbde-c46f-40f9-967a-141680e72734,Namespace:kube-system,Attempt:0,}"
Jul 2 09:27:32.998646 containerd[1432]: time="2024-07-02T09:27:32.998144338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:27:32.998646 containerd[1432]: time="2024-07-02T09:27:32.998211700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:27:32.998646 containerd[1432]: time="2024-07-02T09:27:32.998239221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:27:32.998646 containerd[1432]: time="2024-07-02T09:27:32.998253741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:27:33.018565 systemd[1]: Started cri-containerd-b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34.scope - libcontainer container b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34.
Jul 2 09:27:33.034297 kubelet[2519]: E0702 09:27:33.033123 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:33.039036 containerd[1432]: time="2024-07-02T09:27:33.038996751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dm6qc,Uid:568dfbde-c46f-40f9-967a-141680e72734,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\""
Jul 2 09:27:33.039522 kubelet[2519]: E0702 09:27:33.039504 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:33.042505 containerd[1432]: time="2024-07-02T09:27:33.042448140Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 09:27:33.052295 containerd[1432]: time="2024-07-02T09:27:33.052202647Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896\""
Jul 2 09:27:33.052665 containerd[1432]: time="2024-07-02T09:27:33.052640341Z" level=info msg="StartContainer for \"961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896\""
Jul 2 09:27:33.083602 systemd[1]: Started cri-containerd-961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896.scope - libcontainer container 961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896.
Jul 2 09:27:33.105428 containerd[1432]: time="2024-07-02T09:27:33.103632311Z" level=info msg="StartContainer for \"961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896\" returns successfully"
Jul 2 09:27:33.138633 systemd[1]: cri-containerd-961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896.scope: Deactivated successfully.
Jul 2 09:27:33.164152 containerd[1432]: time="2024-07-02T09:27:33.164078458Z" level=info msg="shim disconnected" id=961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896 namespace=k8s.io
Jul 2 09:27:33.164152 containerd[1432]: time="2024-07-02T09:27:33.164137460Z" level=warning msg="cleaning up after shim disconnected" id=961415e9e6d516cc94651c9def9f652bc2b1518a35e21c9fbe75e492e3ce8896 namespace=k8s.io
Jul 2 09:27:33.164152 containerd[1432]: time="2024-07-02T09:27:33.164146220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:33.210646 kubelet[2519]: E0702 09:27:33.210608 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:33.212457 containerd[1432]: time="2024-07-02T09:27:33.212418984Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 09:27:33.217406 kubelet[2519]: I0702 09:27:33.214538 2519 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T09:27:33Z","lastTransitionTime":"2024-07-02T09:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 09:27:33.235663 containerd[1432]: time="2024-07-02T09:27:33.235541114Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c\""
Jul 2 09:27:33.236764 containerd[1432]: time="2024-07-02T09:27:33.236738032Z" level=info msg="StartContainer for \"b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c\""
Jul 2 09:27:33.259642 systemd[1]: Started cri-containerd-b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c.scope - libcontainer container b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c.
Jul 2 09:27:33.280917 containerd[1432]: time="2024-07-02T09:27:33.280877865Z" level=info msg="StartContainer for \"b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c\" returns successfully"
Jul 2 09:27:33.284367 systemd[1]: cri-containerd-b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c.scope: Deactivated successfully.
Jul 2 09:27:33.303069 containerd[1432]: time="2024-07-02T09:27:33.303019484Z" level=info msg="shim disconnected" id=b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c namespace=k8s.io
Jul 2 09:27:33.303069 containerd[1432]: time="2024-07-02T09:27:33.303069485Z" level=warning msg="cleaning up after shim disconnected" id=b681626cff8886eac337f962544d579a8b9d001c1b25414f6c44087be61a453c namespace=k8s.io
Jul 2 09:27:33.303229 containerd[1432]: time="2024-07-02T09:27:33.303078805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:34.213170 kubelet[2519]: E0702 09:27:34.213140 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:34.215728 containerd[1432]: time="2024-07-02T09:27:34.215692037Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 09:27:34.242992 containerd[1432]: time="2024-07-02T09:27:34.242946750Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9\""
Jul 2 09:27:34.243740 containerd[1432]: time="2024-07-02T09:27:34.243709413Z" level=info msg="StartContainer for \"a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9\""
Jul 2 09:27:34.265556 systemd[1]: Started cri-containerd-a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9.scope - libcontainer container a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9.
Jul 2 09:27:34.290034 containerd[1432]: time="2024-07-02T09:27:34.289995189Z" level=info msg="StartContainer for \"a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9\" returns successfully"
Jul 2 09:27:34.291802 systemd[1]: cri-containerd-a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9.scope: Deactivated successfully.
Jul 2 09:27:34.311819 containerd[1432]: time="2024-07-02T09:27:34.311765574Z" level=info msg="shim disconnected" id=a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9 namespace=k8s.io
Jul 2 09:27:34.311819 containerd[1432]: time="2024-07-02T09:27:34.311816816Z" level=warning msg="cleaning up after shim disconnected" id=a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9 namespace=k8s.io
Jul 2 09:27:34.311994 containerd[1432]: time="2024-07-02T09:27:34.311825936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:34.834089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2caf19107e2393b3cfd57d97d5d9c858775478f221fbd063ee0a8dff5ce9ff9-rootfs.mount: Deactivated successfully.
Jul 2 09:27:35.217469 kubelet[2519]: E0702 09:27:35.217035 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:35.221370 containerd[1432]: time="2024-07-02T09:27:35.221310374Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 09:27:35.233467 containerd[1432]: time="2024-07-02T09:27:35.233409493Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8\""
Jul 2 09:27:35.234004 containerd[1432]: time="2024-07-02T09:27:35.233965309Z" level=info msg="StartContainer for \"5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8\""
Jul 2 09:27:35.262545 systemd[1]: Started cri-containerd-5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8.scope - libcontainer container 5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8.
Jul 2 09:27:35.280509 systemd[1]: cri-containerd-5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8.scope: Deactivated successfully.
Jul 2 09:27:35.282866 containerd[1432]: time="2024-07-02T09:27:35.282817796Z" level=info msg="StartContainer for \"5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8\" returns successfully"
Jul 2 09:27:35.290427 containerd[1432]: time="2024-07-02T09:27:35.288795773Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568dfbde_c46f_40f9_967a_141680e72734.slice/cri-containerd-5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8.scope/memory.events\": no such file or directory"
Jul 2 09:27:35.310849 containerd[1432]: time="2024-07-02T09:27:35.310782384Z" level=info msg="shim disconnected" id=5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8 namespace=k8s.io
Jul 2 09:27:35.310849 containerd[1432]: time="2024-07-02T09:27:35.310835666Z" level=warning msg="cleaning up after shim disconnected" id=5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8 namespace=k8s.io
Jul 2 09:27:35.310849 containerd[1432]: time="2024-07-02T09:27:35.310845066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:27:35.834208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c6952608a84515bb817d243658b86cda7514e45558d30937142431d744469b8-rootfs.mount: Deactivated successfully.
Jul 2 09:27:36.078618 kubelet[2519]: E0702 09:27:36.078590 2519 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 09:27:36.221700 kubelet[2519]: E0702 09:27:36.220601 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:36.224286 containerd[1432]: time="2024-07-02T09:27:36.224222314Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 09:27:36.237878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356449928.mount: Deactivated successfully.
Jul 2 09:27:36.241162 containerd[1432]: time="2024-07-02T09:27:36.241111559Z" level=info msg="CreateContainer within sandbox \"b2d2e5919943d7ede0b8286f990b8ecb234978e07aac4d5e9a09cef8cb591b34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e\""
Jul 2 09:27:36.241736 containerd[1432]: time="2024-07-02T09:27:36.241701336Z" level=info msg="StartContainer for \"77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e\""
Jul 2 09:27:36.275525 systemd[1]: Started cri-containerd-77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e.scope - libcontainer container 77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e.
Jul 2 09:27:36.296833 containerd[1432]: time="2024-07-02T09:27:36.296796797Z" level=info msg="StartContainer for \"77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e\" returns successfully"
Jul 2 09:27:36.564512 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 09:27:36.834327 systemd[1]: run-containerd-runc-k8s.io-77cfe9b4a0e0f775157853c21eabf2a72bda86eeabbdbeeec3e7c3384702d51e-runc.o1VKgl.mount: Deactivated successfully.
Jul 2 09:27:37.224678 kubelet[2519]: E0702 09:27:37.224586 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:37.237542 kubelet[2519]: I0702 09:27:37.237495 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dm6qc" podStartSLOduration=5.237467297 podStartE2EDuration="5.237467297s" podCreationTimestamp="2024-07-02 09:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:27:37.237037125 +0000 UTC m=+86.293554132" watchObservedRunningTime="2024-07-02 09:27:37.237467297 +0000 UTC m=+86.293984264"
Jul 2 09:27:38.033126 kubelet[2519]: E0702 09:27:38.033067 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:38.980682 kubelet[2519]: E0702 09:27:38.980643 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:39.282696 systemd-networkd[1374]: lxc_health: Link UP
Jul 2 09:27:39.289429 systemd-networkd[1374]: lxc_health: Gained carrier
Jul 2 09:27:40.033553 kubelet[2519]: E0702 09:27:40.033458 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:40.982147 kubelet[2519]: E0702 09:27:40.981147 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:41.091518 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jul 2 09:27:41.231839 kubelet[2519]: E0702 09:27:41.231733 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:42.233924 kubelet[2519]: E0702 09:27:42.233879 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:27:45.452552 sshd[4340]: pam_unix(sshd:session): session closed for user core
Jul 2 09:27:45.455600 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:49310.service: Deactivated successfully.
Jul 2 09:27:45.457506 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 09:27:45.458363 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Jul 2 09:27:45.459177 systemd-logind[1418]: Removed session 26.