Jul 2 00:11:06.932524 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 00:11:06.932545 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 00:11:06.932596 kernel: KASLR enabled
Jul 2 00:11:06.932603 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:11:06.932609 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 00:11:06.932614 kernel: random: crng init done
Jul 2 00:11:06.932621 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:11:06.932627 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 00:11:06.932634 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:11:06.932642 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932648 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932654 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932660 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932666 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932673 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932680 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932687 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932693 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:11:06.932700 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 00:11:06.932706 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:11:06.932712 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:11:06.932719 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 00:11:06.932725 kernel: Zone ranges:
Jul 2 00:11:06.932731 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:11:06.932737 kernel: DMA32 empty
Jul 2 00:11:06.932744 kernel: Normal empty
Jul 2 00:11:06.932751 kernel: Movable zone start for each node
Jul 2 00:11:06.932757 kernel: Early memory node ranges
Jul 2 00:11:06.932763 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 00:11:06.932769 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 00:11:06.932776 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 00:11:06.932782 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 00:11:06.932788 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 00:11:06.932794 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 00:11:06.932800 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 00:11:06.932807 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:11:06.932813 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 00:11:06.932821 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:11:06.932827 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 00:11:06.932833 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:11:06.932842 kernel: psci: Trusted OS migration not required
Jul 2 00:11:06.932849 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:11:06.932856 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 00:11:06.932864 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 00:11:06.932871 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 00:11:06.932878 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 00:11:06.932884 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:11:06.932891 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:11:06.932897 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 00:11:06.932904 kernel: CPU features: detected: Spectre-v4
Jul 2 00:11:06.932910 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:11:06.932917 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:11:06.932924 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:11:06.932932 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 00:11:06.932939 kernel: alternatives: applying boot alternatives
Jul 2 00:11:06.932946 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:11:06.932954 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:11:06.932960 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:11:06.932967 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:11:06.932974 kernel: Fallback order for Node 0: 0
Jul 2 00:11:06.932980 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 00:11:06.932987 kernel: Policy zone: DMA
Jul 2 00:11:06.932993 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:11:06.933000 kernel: software IO TLB: area num 4.
Jul 2 00:11:06.933008 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 00:11:06.933015 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 00:11:06.933022 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:11:06.933028 kernel: trace event string verifier disabled
Jul 2 00:11:06.933035 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:11:06.933042 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:11:06.933049 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:11:06.933056 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:11:06.933063 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:11:06.933070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:11:06.933076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:11:06.933083 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:11:06.933091 kernel: GICv3: 256 SPIs implemented
Jul 2 00:11:06.933098 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:11:06.933104 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:11:06.933111 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 00:11:06.933117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 00:11:06.933124 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 00:11:06.933130 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:11:06.933137 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:11:06.933144 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 00:11:06.933151 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 00:11:06.933157 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:11:06.933165 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:11:06.933172 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 00:11:06.933179 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 00:11:06.933185 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 00:11:06.933192 kernel: arm-pv: using stolen time PV
Jul 2 00:11:06.933199 kernel: Console: colour dummy device 80x25
Jul 2 00:11:06.933206 kernel: ACPI: Core revision 20230628
Jul 2 00:11:06.933213 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 00:11:06.933220 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:11:06.933227 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:11:06.933235 kernel: SELinux: Initializing.
Jul 2 00:11:06.933242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:11:06.933249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:11:06.933255 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:11:06.933262 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:11:06.933269 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:11:06.933276 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:11:06.933283 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 00:11:06.933290 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 00:11:06.933298 kernel: Remapping and enabling EFI services.
Jul 2 00:11:06.933305 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:11:06.933312 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:11:06.933318 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 00:11:06.933325 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 00:11:06.933332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:11:06.933339 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 00:11:06.933346 kernel: Detected PIPT I-cache on CPU2
Jul 2 00:11:06.933353 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 00:11:06.933360 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 00:11:06.933369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:11:06.933376 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 00:11:06.933388 kernel: Detected PIPT I-cache on CPU3
Jul 2 00:11:06.933397 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 00:11:06.933404 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 00:11:06.933411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:11:06.933418 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 00:11:06.933425 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:11:06.933433 kernel: SMP: Total of 4 processors activated.
Jul 2 00:11:06.933442 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:11:06.933449 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 00:11:06.933457 kernel: CPU features: detected: Common not Private translations
Jul 2 00:11:06.933464 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:11:06.933472 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 00:11:06.933479 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 00:11:06.933486 kernel: CPU features: detected: LSE atomic instructions
Jul 2 00:11:06.933502 kernel: CPU features: detected: Privileged Access Never
Jul 2 00:11:06.933512 kernel: CPU features: detected: RAS Extension Support
Jul 2 00:11:06.933519 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 00:11:06.933526 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:11:06.933534 kernel: alternatives: applying system-wide alternatives
Jul 2 00:11:06.933541 kernel: devtmpfs: initialized
Jul 2 00:11:06.933548 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:11:06.933646 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:11:06.933654 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:11:06.933661 kernel: SMBIOS 3.0.0 present.
Jul 2 00:11:06.933671 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 00:11:06.933679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:11:06.933686 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:11:06.933693 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:11:06.933700 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:11:06.933708 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:11:06.933715 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 2 00:11:06.933722 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:11:06.933730 kernel: cpuidle: using governor menu
Jul 2 00:11:06.933738 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:11:06.933746 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:11:06.933753 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:11:06.933760 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:11:06.933767 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 00:11:06.933774 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 00:11:06.933781 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 00:11:06.933789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:11:06.933796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:11:06.933805 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:11:06.933813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 00:11:06.933820 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:11:06.933828 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:11:06.933835 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:11:06.933842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 00:11:06.933849 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:11:06.933857 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:11:06.933864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:11:06.933874 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:11:06.933881 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:11:06.933888 kernel: ACPI: Interpreter enabled
Jul 2 00:11:06.933896 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:11:06.933903 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:11:06.933910 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 00:11:06.933918 kernel: printk: console [ttyAMA0] enabled
Jul 2 00:11:06.933926 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:11:06.934077 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:11:06.934155 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:11:06.934223 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:11:06.934287 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 00:11:06.934351 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 00:11:06.934360 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 00:11:06.934368 kernel: PCI host bridge to bus 0000:00
Jul 2 00:11:06.934439 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 00:11:06.934515 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:11:06.934594 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 00:11:06.934653 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:11:06.934733 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 00:11:06.934810 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:11:06.934885 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 00:11:06.934959 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 00:11:06.935025 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:11:06.935089 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:11:06.935155 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 00:11:06.935236 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 00:11:06.935297 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 00:11:06.935356 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:11:06.935420 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 00:11:06.935429 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:11:06.935437 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:11:06.935444 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:11:06.935451 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:11:06.935459 kernel: iommu: Default domain type: Translated
Jul 2 00:11:06.935466 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:11:06.935473 kernel: efivars: Registered efivars operations
Jul 2 00:11:06.935481 kernel: vgaarb: loaded
Jul 2 00:11:06.935498 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:11:06.935506 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:11:06.935514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:11:06.935522 kernel: pnp: PnP ACPI init
Jul 2 00:11:06.935610 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 00:11:06.935625 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:11:06.935635 kernel: NET: Registered PF_INET protocol family
Jul 2 00:11:06.935645 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:11:06.935656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:11:06.935663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:11:06.935671 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:11:06.935678 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:11:06.935686 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:11:06.935695 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:11:06.935703 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:11:06.935710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:11:06.935718 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:11:06.935728 kernel: kvm [1]: HYP mode not available
Jul 2 00:11:06.935735 kernel: Initialise system trusted keyrings
Jul 2 00:11:06.935745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:11:06.935754 kernel: Key type asymmetric registered
Jul 2 00:11:06.935764 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:11:06.935771 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 00:11:06.935779 kernel: io scheduler mq-deadline registered
Jul 2 00:11:06.935786 kernel: io scheduler kyber registered
Jul 2 00:11:06.935793 kernel: io scheduler bfq registered
Jul 2 00:11:06.935802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:11:06.935809 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:11:06.935816 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:11:06.935886 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 00:11:06.935897 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:11:06.935904 kernel: thunder_xcv, ver 1.0
Jul 2 00:11:06.935911 kernel: thunder_bgx, ver 1.0
Jul 2 00:11:06.935919 kernel: nicpf, ver 1.0
Jul 2 00:11:06.935926 kernel: nicvf, ver 1.0
Jul 2 00:11:06.936005 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:11:06.936072 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:11:06 UTC (1719879066)
Jul 2 00:11:06.936083 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:11:06.936090 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 00:11:06.936098 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 00:11:06.936105 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 00:11:06.936113 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:11:06.936120 kernel: Segment Routing with IPv6
Jul 2 00:11:06.936130 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:11:06.936137 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:11:06.936145 kernel: Key type dns_resolver registered
Jul 2 00:11:06.936152 kernel: registered taskstats version 1
Jul 2 00:11:06.936160 kernel: Loading compiled-in X.509 certificates
Jul 2 00:11:06.936167 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 00:11:06.936175 kernel: Key type .fscrypt registered
Jul 2 00:11:06.936182 kernel: Key type fscrypt-provisioning registered
Jul 2 00:11:06.936190 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:11:06.936199 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:11:06.936206 kernel: ima: No architecture policies found
Jul 2 00:11:06.936214 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:11:06.936221 kernel: clk: Disabling unused clocks
Jul 2 00:11:06.936229 kernel: Freeing unused kernel memory: 39040K
Jul 2 00:11:06.936236 kernel: Run /init as init process
Jul 2 00:11:06.936243 kernel: with arguments:
Jul 2 00:11:06.936251 kernel: /init
Jul 2 00:11:06.936258 kernel: with environment:
Jul 2 00:11:06.936267 kernel: HOME=/
Jul 2 00:11:06.936275 kernel: TERM=linux
Jul 2 00:11:06.936282 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:11:06.936291 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:11:06.936301 systemd[1]: Detected virtualization kvm.
Jul 2 00:11:06.936310 systemd[1]: Detected architecture arm64.
Jul 2 00:11:06.936317 systemd[1]: Running in initrd.
Jul 2 00:11:06.936327 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:11:06.936334 systemd[1]: Hostname set to .
Jul 2 00:11:06.936343 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:11:06.936351 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:11:06.936358 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:11:06.936367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:11:06.936377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:11:06.936386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:11:06.936396 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:11:06.936404 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:11:06.936414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:11:06.936422 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:11:06.936430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:11:06.936438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:11:06.936446 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:11:06.936455 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:11:06.936463 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:11:06.936472 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:11:06.936480 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:11:06.936495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:11:06.936504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:11:06.936513 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:11:06.936521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:11:06.936529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:11:06.936539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:11:06.936548 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:11:06.936580 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:11:06.936589 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:11:06.936597 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:11:06.936605 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:11:06.936613 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:11:06.936621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:11:06.936632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:06.936640 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:11:06.936648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:11:06.936656 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:11:06.936665 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:11:06.936675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:11:06.936704 systemd-journald[237]: Collecting audit messages is disabled.
Jul 2 00:11:06.936725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:06.936733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:06.936744 systemd-journald[237]: Journal started
Jul 2 00:11:06.936764 systemd-journald[237]: Runtime Journal (/run/log/journal/7637ed160715488c8da4ade17af6961b) is 5.9M, max 47.3M, 41.4M free.
Jul 2 00:11:06.921288 systemd-modules-load[238]: Inserted module 'overlay'
Jul 2 00:11:06.939571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:11:06.939602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:11:06.942107 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:11:06.942759 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 2 00:11:06.943594 kernel: Bridge firewalling registered
Jul 2 00:11:06.943877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:11:06.947159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:11:06.949799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:11:06.951025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:11:06.963621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:11:06.965728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:06.968148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:11:06.972069 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:11:06.974290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:11:06.992121 dracut-cmdline[275]: dracut-dracut-053
Jul 2 00:11:06.995620 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:11:07.004517 systemd-resolved[276]: Positive Trust Anchors:
Jul 2 00:11:07.004539 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:11:07.004582 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:11:07.010109 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 2 00:11:07.011287 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:11:07.014610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:11:07.075595 kernel: SCSI subsystem initialized
Jul 2 00:11:07.080584 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:11:07.087580 kernel: iscsi: registered transport (tcp)
Jul 2 00:11:07.102849 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:11:07.102913 kernel: QLogic iSCSI HBA Driver
Jul 2 00:11:07.149841 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:11:07.160745 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:11:07.179058 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:11:07.179122 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:11:07.179137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:11:07.227587 kernel: raid6: neonx8 gen() 15761 MB/s
Jul 2 00:11:07.244572 kernel: raid6: neonx4 gen() 15654 MB/s
Jul 2 00:11:07.261574 kernel: raid6: neonx2 gen() 13205 MB/s
Jul 2 00:11:07.278573 kernel: raid6: neonx1 gen() 10470 MB/s
Jul 2 00:11:07.295579 kernel: raid6: int64x8 gen() 6958 MB/s
Jul 2 00:11:07.312572 kernel: raid6: int64x4 gen() 7322 MB/s
Jul 2 00:11:07.329574 kernel: raid6: int64x2 gen() 6124 MB/s
Jul 2 00:11:07.346577 kernel: raid6: int64x1 gen() 5055 MB/s
Jul 2 00:11:07.346624 kernel: raid6: using algorithm neonx8 gen() 15761 MB/s
Jul 2 00:11:07.363577 kernel: raid6: .... xor() 11918 MB/s, rmw enabled
Jul 2 00:11:07.363624 kernel: raid6: using neon recovery algorithm
Jul 2 00:11:07.368576 kernel: xor: measuring software checksum speed
Jul 2 00:11:07.369573 kernel: 8regs : 19854 MB/sec
Jul 2 00:11:07.370939 kernel: 32regs : 19682 MB/sec
Jul 2 00:11:07.370954 kernel: arm64_neon : 27116 MB/sec
Jul 2 00:11:07.370964 kernel: xor: using function: arm64_neon (27116 MB/sec)
Jul 2 00:11:07.428586 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:11:07.443641 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:11:07.457785 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:11:07.473963 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jul 2 00:11:07.477361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:11:07.484974 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:11:07.501282 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 2 00:11:07.539225 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:11:07.548976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:11:07.611605 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:11:07.621784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:11:07.635598 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:11:07.637282 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:11:07.638785 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:11:07.641793 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:11:07.652815 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:11:07.663646 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 00:11:07.683663 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:11:07.683779 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:11:07.683790 kernel: GPT:9289727 != 19775487
Jul 2 00:11:07.683807 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:11:07.683817 kernel: GPT:9289727 != 19775487
Jul 2 00:11:07.683826 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:11:07.683837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:07.666685 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:11:07.686537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:11:07.686761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:07.689897 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:07.692110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:11:07.692258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:07.694072 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:07.703052 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (525)
Jul 2 00:11:07.703075 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508)
Jul 2 00:11:07.701847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:07.715569 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:11:07.720647 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:11:07.721769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:07.732682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:11:07.736306 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:11:07.737253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:11:07.755770 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:11:07.757962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:07.765212 disk-uuid[555]: Primary Header is updated.
Jul 2 00:11:07.765212 disk-uuid[555]: Secondary Entries is updated.
Jul 2 00:11:07.765212 disk-uuid[555]: Secondary Header is updated.
Jul 2 00:11:07.771591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:07.786728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:08.783583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:08.783982 disk-uuid[556]: The operation has completed successfully.
Jul 2 00:11:08.819215 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:11:08.819330 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:11:08.839729 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:11:08.842751 sh[577]: Success
Jul 2 00:11:08.856588 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:11:08.894009 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:11:08.904042 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:11:08.905675 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:11:08.919086 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17
Jul 2 00:11:08.919141 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:11:08.919161 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:11:08.919858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:11:08.920873 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:11:08.924431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:11:08.925724 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:11:08.926512 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:11:08.928523 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:11:08.942528 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:11:08.942608 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:11:08.942620 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:08.945610 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:08.955510 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:11:08.956968 kernel: BTRFS info (device vda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:11:08.966547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:11:08.971758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:11:09.044505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:11:09.055758 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:11:09.089692 systemd-networkd[764]: lo: Link UP
Jul 2 00:11:09.089700 systemd-networkd[764]: lo: Gained carrier
Jul 2 00:11:09.090379 systemd-networkd[764]: Enumeration completed
Jul 2 00:11:09.091089 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:09.091092 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:11:09.093820 systemd-networkd[764]: eth0: Link UP
Jul 2 00:11:09.093824 systemd-networkd[764]: eth0: Gained carrier
Jul 2 00:11:09.093834 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:09.095049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:11:09.102551 ignition[679]: Ignition 2.18.0
Jul 2 00:11:09.096220 systemd[1]: Reached target network.target - Network.
Jul 2 00:11:09.102579 ignition[679]: Stage: fetch-offline
Jul 2 00:11:09.102618 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:09.102626 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:09.102714 ignition[679]: parsed url from cmdline: ""
Jul 2 00:11:09.102718 ignition[679]: no config URL provided
Jul 2 00:11:09.102722 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:11:09.102729 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:11:09.102755 ignition[679]: op(1): [started] loading QEMU firmware config module
Jul 2 00:11:09.102760 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:11:09.121767 ignition[679]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:11:09.134614 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:11:09.166352 ignition[679]: parsing config with SHA512: 7027bd88c655b9522b5cbdfbae39ebd53f8bbc234fe19d4de49f1aa234069a88fb806ff4494bb6ee4e3c104258969d965ef246df9de1ef654b497da2d5314d02
Jul 2 00:11:09.170647 unknown[679]: fetched base config from "system"
Jul 2 00:11:09.170656 unknown[679]: fetched user config from "qemu"
Jul 2 00:11:09.172057 ignition[679]: fetch-offline: fetch-offline passed
Jul 2 00:11:09.172136 ignition[679]: Ignition finished successfully
Jul 2 00:11:09.173617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:11:09.174866 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:11:09.184771 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:11:09.196427 ignition[776]: Ignition 2.18.0
Jul 2 00:11:09.196438 ignition[776]: Stage: kargs
Jul 2 00:11:09.196677 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:09.196689 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:09.198217 ignition[776]: kargs: kargs passed
Jul 2 00:11:09.198280 ignition[776]: Ignition finished successfully
Jul 2 00:11:09.200869 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:11:09.212771 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:11:09.222762 ignition[785]: Ignition 2.18.0
Jul 2 00:11:09.222772 ignition[785]: Stage: disks
Jul 2 00:11:09.222939 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:09.222948 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:09.225477 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:11:09.223979 ignition[785]: disks: disks passed
Jul 2 00:11:09.227069 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:11:09.224032 ignition[785]: Ignition finished successfully
Jul 2 00:11:09.228533 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:11:09.230044 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:11:09.231850 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:11:09.233288 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:11:09.240717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:11:09.253209 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:11:09.261595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:11:09.269721 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:11:09.324580 kernel: EXT4-fs (vda9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none.
Jul 2 00:11:09.324993 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:11:09.326048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:11:09.336657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:11:09.338313 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:11:09.339125 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:11:09.339168 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:11:09.339191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:11:09.346898 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Jul 2 00:11:09.345149 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:11:09.346684 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:11:09.351608 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:11:09.351633 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:11:09.351643 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:09.353580 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:09.355450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:11:09.397331 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:11:09.401856 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:11:09.406368 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:11:09.410707 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:11:09.508692 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:11:09.519726 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:11:09.522858 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:11:09.533576 kernel: BTRFS info (device vda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:11:09.562310 ignition[917]: INFO : Ignition 2.18.0
Jul 2 00:11:09.562310 ignition[917]: INFO : Stage: mount
Jul 2 00:11:09.562310 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:09.562310 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:09.562310 ignition[917]: INFO : mount: mount passed
Jul 2 00:11:09.562310 ignition[917]: INFO : Ignition finished successfully
Jul 2 00:11:09.561037 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:11:09.563417 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:11:09.573729 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:11:09.917031 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:11:09.931825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:11:09.940590 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Jul 2 00:11:09.943045 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:11:09.943075 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:11:09.943086 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:09.945576 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:09.946390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:11:09.972251 ignition[948]: INFO : Ignition 2.18.0
Jul 2 00:11:09.972251 ignition[948]: INFO : Stage: files
Jul 2 00:11:09.973666 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:09.973666 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:09.973666 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:11:09.976166 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:11:09.976166 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:11:09.978700 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:11:09.978700 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:11:09.981137 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:11:09.981137 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:11:09.981137 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:11:09.981137 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:11:09.981137 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:11:09.978867 unknown[948]: wrote ssh authorized keys file for user: core
Jul 2 00:11:10.024877 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:11:10.068846 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:11:10.068846 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:11:10.072327 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:11:10.230671 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 2 00:11:10.373043 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 00:11:10.504535 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:11:10.504535 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:11:10.507905 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 00:11:10.671324 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 00:11:11.000205 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:11:11.000205 ignition[948]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 2 00:11:11.003470 ignition[948]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:11:11.038824 ignition[948]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:11:11.043356 ignition[948]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:11:11.045725 ignition[948]: INFO : files: files passed
Jul 2 00:11:11.045725 ignition[948]: INFO : Ignition finished successfully
Jul 2 00:11:11.047606 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:11:11.056731 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:11:11.059044 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:11:11.061427 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:11:11.061579 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:11:11.066976 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:11:11.069292 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:11.069292 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:11.074960 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:11.071082 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:11:11.072756 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:11:11.078178 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:11:11.103019 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:11:11.103132 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:11:11.105923 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:11:11.106678 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:11:11.108397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:11:11.109237 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:11:11.125612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:11:11.142948 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:11:11.150978 systemd[1]: Stopped target network.target - Network.
Jul 2 00:11:11.151872 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:11:11.153327 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:11:11.155207 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:11:11.156779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:11:11.156911 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:11:11.159080 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:11:11.160652 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:11:11.162066 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:11:11.163607 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:11:11.165308 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:11:11.167155 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:11:11.168655 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:11:11.170324 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:11:11.172014 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:11:11.173454 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:11:11.174896 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:11:11.175033 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:11:11.177101 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:11:11.178820 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:11:11.180662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:11:11.180764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:11:11.182694 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:11:11.182837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:11:11.185441 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:11:11.185579 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:11:11.187297 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:11:11.188581 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:11:11.193679 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:11:11.194863 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:11:11.196771 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:11:11.198212 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:11:11.198308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:11:11.199727 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:11:11.199812 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:11:11.201294 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:11:11.201410 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:11:11.202982 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:11:11.203083 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:11:11.216767 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:11:11.217574 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:11:11.217708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:11:11.222633 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:11:11.223504 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:11:11.226864 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:11:11.229206 ignition[1004]: INFO : Ignition 2.18.0
Jul 2 00:11:11.229206 ignition[1004]: INFO : Stage: umount
Jul 2 00:11:11.229206 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:11.229206 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:11.229206 ignition[1004]: INFO : umount: umount passed
Jul 2 00:11:11.229206 ignition[1004]: INFO : Ignition finished successfully
Jul 2 00:11:11.228260 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:11:11.228392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:11:11.230327 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:11:11.230427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:11:11.240623 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jul 2 00:11:11.241177 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:11:11.242048 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:11:11.245379 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:11:11.245791 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:11:11.247760 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:11:11.247845 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:11:11.252364 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:11:11.252883 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:11:11.252968 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:11:11.254922 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:11:11.254977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:11:11.256633 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:11:11.256689 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:11:11.258399 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:11:11.258445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:11:11.259845 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:11:11.259886 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:11:11.261457 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:11:11.261514 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:11:11.273673 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:11:11.274337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:11:11.274398 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:11:11.276472 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:11:11.276535 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:11:11.278103 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:11:11.278149 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:11:11.279804 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:11:11.279842 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:11:11.281472 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:11:11.283730 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:11:11.283846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:11:11.287310 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:11:11.287392 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:11:11.293017 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:11:11.293130 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:11:11.299303 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:11:11.299455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:11:11.301573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:11:11.301632 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:11:11.302972 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:11:11.303003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:11:11.304444 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:11:11.304499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:11:11.306786 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:11:11.306831 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:11:11.309087 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:11:11.309135 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:11.318793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:11:11.319548 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:11:11.319640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:11:11.321402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:11:11.321443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:11.324042 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:11:11.324153 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:11:11.325940 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:11:11.328139 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:11:11.338784 systemd[1]: Switching root.
Jul 2 00:11:11.364245 systemd-journald[237]: Journal stopped
Jul 2 00:11:12.283528 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:11:12.283591 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:11:12.283604 kernel: SELinux: policy capability open_perms=1
Jul 2 00:11:12.283614 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:11:12.283624 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:11:12.283633 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:11:12.283646 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:11:12.283656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:11:12.283665 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:11:12.283677 kernel: audit: type=1403 audit(1719879071.586:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:11:12.283687 systemd[1]: Successfully loaded SELinux policy in 34.768ms.
Jul 2 00:11:12.283708 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.405ms.
Jul 2 00:11:12.283720 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:11:12.283731 systemd[1]: Detected virtualization kvm.
Jul 2 00:11:12.283742 systemd[1]: Detected architecture arm64.
Jul 2 00:11:12.283754 systemd[1]: Detected first boot.
Jul 2 00:11:12.283764 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:11:12.283811 zram_generator::config[1067]: No configuration found.
Jul 2 00:11:12.283824 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:11:12.283834 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:11:12.283849 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:11:12.283860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:11:12.283873 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:11:12.283888 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:11:12.283900 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:11:12.283916 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:11:12.283934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:11:12.283956 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:11:12.283966 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:11:12.283977 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:11:12.283999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:11:12.284012 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:11:12.284027 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:11:12.284038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:11:12.284049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:11:12.284060 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 2 00:11:12.284071 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:11:12.284082 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:11:12.284092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:11:12.284103 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:11:12.284115 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:11:12.284126 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:11:12.284138 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:11:12.284148 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:11:12.284158 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:11:12.284170 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:11:12.284180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:11:12.284191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:11:12.284204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:11:12.284217 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:11:12.284227 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:11:12.284238 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:11:12.284249 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:11:12.284259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:11:12.284270 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:11:12.284281 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:11:12.284292 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:11:12.284305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:11:12.284315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:11:12.284419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:11:12.284437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:11:12.284448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:11:12.284459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:11:12.284478 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:11:12.284491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:11:12.284503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:11:12.284520 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 00:11:12.284542 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 00:11:12.284594 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:11:12.284615 kernel: fuse: init (API version 7.39)
Jul 2 00:11:12.284627 kernel: loop: module loaded
Jul 2 00:11:12.284637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:11:12.284650 kernel: ACPI: bus type drm_connector registered
Jul 2 00:11:12.284660 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:11:12.284673 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:11:12.284683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:11:12.284719 systemd-journald[1141]: Collecting audit messages is disabled.
Jul 2 00:11:12.284741 systemd-journald[1141]: Journal started
Jul 2 00:11:12.284763 systemd-journald[1141]: Runtime Journal (/run/log/journal/7637ed160715488c8da4ade17af6961b) is 5.9M, max 47.3M, 41.4M free.
Jul 2 00:11:12.287579 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:11:12.290424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:11:12.291628 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:11:12.292732 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:11:12.293676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:11:12.294726 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:11:12.295834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:11:12.297119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:11:12.298743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:11:12.300481 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:11:12.300671 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:11:12.302020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:11:12.302177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:11:12.303518 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:11:12.303762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:11:12.305081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:11:12.305242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:11:12.306807 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:11:12.306965 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:11:12.308374 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:11:12.308617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:11:12.309926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:11:12.311721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:11:12.313283 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:11:12.324749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:11:12.338688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:11:12.341094 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:11:12.342258 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:11:12.345598 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:11:12.347717 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:11:12.348640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:11:12.349841 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:11:12.351066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:11:12.352734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:11:12.357955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:11:12.364185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:11:12.365795 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:11:12.369846 systemd-journald[1141]: Time spent on flushing to /var/log/journal/7637ed160715488c8da4ade17af6961b is 12.494ms for 848 entries.
Jul 2 00:11:12.369846 systemd-journald[1141]: System Journal (/var/log/journal/7637ed160715488c8da4ade17af6961b) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:11:12.396793 systemd-journald[1141]: Received client request to flush runtime journal.
Jul 2 00:11:12.367192 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:11:12.368750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:11:12.372280 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:11:12.376863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:11:12.392489 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:11:12.395298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:11:12.402282 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 2 00:11:12.402302 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 2 00:11:12.403243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:11:12.410148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:11:12.419848 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:11:12.443828 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:11:12.461836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:11:12.475578 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Jul 2 00:11:12.475597 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Jul 2 00:11:12.480356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:11:12.837463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:11:12.850788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:11:12.870241 systemd-udevd[1233]: Using default interface naming scheme 'v255'.
Jul 2 00:11:12.883388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:11:12.891767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:11:12.909721 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:11:12.920271 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 2 00:11:12.938624 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1239)
Jul 2 00:11:12.959344 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:11:12.965596 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1246)
Jul 2 00:11:12.984590 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:11:13.035873 systemd-networkd[1242]: lo: Link UP
Jul 2 00:11:13.036205 systemd-networkd[1242]: lo: Gained carrier
Jul 2 00:11:13.036969 systemd-networkd[1242]: Enumeration completed
Jul 2 00:11:13.037517 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:13.037527 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:11:13.038228 systemd-networkd[1242]: eth0: Link UP
Jul 2 00:11:13.038239 systemd-networkd[1242]: eth0: Gained carrier
Jul 2 00:11:13.038251 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:13.044850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:13.046253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:11:13.049048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:11:13.056658 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:11:13.058103 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:11:13.061695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:11:13.086248 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:11:13.094870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:13.115568 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:11:13.117465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:11:13.126718 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:11:13.131849 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:11:13.164099 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:11:13.165414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:11:13.166576 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:11:13.166608 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:11:13.167453 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:11:13.169317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:11:13.178759 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:11:13.181149 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:11:13.182264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:11:13.183314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:11:13.187858 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:11:13.190227 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:11:13.192115 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:11:13.200074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:11:13.204704 kernel: loop0: detected capacity change from 0 to 193208
Jul 2 00:11:13.204895 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:11:13.216032 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:11:13.216691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:11:13.216903 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:11:13.245597 kernel: loop1: detected capacity change from 0 to 59672
Jul 2 00:11:13.278597 kernel: loop2: detected capacity change from 0 to 113672
Jul 2 00:11:13.306585 kernel: loop3: detected capacity change from 0 to 193208
Jul 2 00:11:13.313589 kernel: loop4: detected capacity change from 0 to 59672
Jul 2 00:11:13.319600 kernel: loop5: detected capacity change from 0 to 113672
Jul 2 00:11:13.322850 (sd-merge)[1302]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:11:13.323263 (sd-merge)[1302]: Merged extensions into '/usr'.
Jul 2 00:11:13.329241 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:11:13.329261 systemd[1]: Reloading...
Jul 2 00:11:13.373607 zram_generator::config[1331]: No configuration found.
Jul 2 00:11:13.472328 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:11:13.481328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:11:13.526990 systemd[1]: Reloading finished in 197 ms.
Jul 2 00:11:13.542498 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:11:13.543728 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:11:13.555781 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:11:13.557897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:11:13.564172 systemd[1]: Reloading requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:11:13.564189 systemd[1]: Reloading...
Jul 2 00:11:13.579005 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:11:13.579281 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:11:13.579975 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:11:13.580200 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Jul 2 00:11:13.580246 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Jul 2 00:11:13.583094 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:11:13.583110 systemd-tmpfiles[1375]: Skipping /boot
Jul 2 00:11:13.590246 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:11:13.590265 systemd-tmpfiles[1375]: Skipping /boot
Jul 2 00:11:13.617701 zram_generator::config[1399]: No configuration found.
Jul 2 00:11:13.716066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:11:13.761906 systemd[1]: Reloading finished in 197 ms.
Jul 2 00:11:13.776572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:11:13.792180 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:11:13.794732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:11:13.797347 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:11:13.800775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:11:13.803850 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:11:13.815030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:11:13.816378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:11:13.824507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:11:13.829867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:11:13.831133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:11:13.833778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:11:13.837842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:11:13.838012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:11:13.839847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:11:13.840016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:11:13.841680 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:11:13.841943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:11:13.851404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:11:13.856963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:11:13.861080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:11:13.864645 augenrules[1479]: No rules
Jul 2 00:11:13.867389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:11:13.871917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:11:13.872964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:11:13.888303 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:11:13.891457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:11:13.893541 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:11:13.894934 systemd-resolved[1448]: Positive Trust Anchors:
Jul 2 00:11:13.894954 systemd-resolved[1448]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:11:13.894985 systemd-resolved[1448]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:11:13.895310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:11:13.895537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:11:13.897182 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:11:13.897354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:11:13.899277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:11:13.899443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:11:13.901613 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:11:13.901831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:11:13.903278 systemd-resolved[1448]: Defaulting to hostname 'linux'.
Jul 2 00:11:13.903784 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:11:13.909619 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:11:13.911302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:11:13.917860 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:11:13.920869 systemd[1]: Reached target network.target - Network.
Jul 2 00:11:13.921755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:11:13.922823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:11:13.922901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:11:13.936793 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:11:13.937872 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:11:13.980297 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:11:13.981847 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:11:13.981893 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:11:13.981901 systemd-timesyncd[1506]: Initial clock synchronization to Tue 2024-07-02 00:11:14.246579 UTC.
Jul 2 00:11:13.983015 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:11:13.984097 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:11:13.985211 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:11:13.986260 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:11:13.986298 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:11:13.987114 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:11:13.988147 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:11:13.989201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:11:13.990227 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:11:13.991837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:11:13.994310 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:11:13.996693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:11:14.002711 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:11:14.003547 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:11:14.004280 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:11:14.005210 systemd[1]: System is tainted: cgroupsv1
Jul 2 00:11:14.005262 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:11:14.005283 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:11:14.006639 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:11:14.008740 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:11:14.010726 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:11:14.015830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:11:14.016827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:11:14.020826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:11:14.023186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:11:14.024433 jq[1512]: false
Jul 2 00:11:14.033748 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:11:14.036496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found loop3
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found loop4
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found loop5
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda1
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda2
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda3
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found usr
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda4
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda6
Jul 2 00:11:14.039423 extend-filesystems[1513]: Found vda7
Jul 2 00:11:14.042527 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:11:14.058474 extend-filesystems[1513]: Found vda9
Jul 2 00:11:14.058474 extend-filesystems[1513]: Checking size of /dev/vda9
Jul 2 00:11:14.045952 dbus-daemon[1511]: [system] SELinux support is enabled
Jul 2 00:11:14.058873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:11:14.061602 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:11:14.066735 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:11:14.074152 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:11:14.075708 jq[1536]: true
Jul 2 00:11:14.082662 extend-filesystems[1513]: Resized partition /dev/vda9
Jul 2 00:11:14.079690 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:11:14.079948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:11:14.080371 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:11:14.080622 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:11:14.085426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:11:14.086866 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:11:14.091910 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1252)
Jul 2 00:11:14.098100 extend-filesystems[1541]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:11:14.112670 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:11:14.113274 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:11:14.123334 jq[1544]: true
Jul 2 00:11:14.139139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:11:14.139184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:11:14.140516 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:11:14.140546 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:11:14.147710 tar[1542]: linux-arm64/helm
Jul 2 00:11:14.150838 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 2 00:11:14.151054 systemd-logind[1524]: New seat seat0.
Jul 2 00:11:14.152757 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:11:14.171633 update_engine[1535]: I0702 00:11:14.171283 1535 main.cc:92] Flatcar Update Engine starting
Jul 2 00:11:14.178173 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:11:14.180540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:11:14.184619 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:11:14.184713 update_engine[1535]: I0702 00:11:14.182978 1535 update_check_scheduler.cc:74] Next update check in 7m25s
Jul 2 00:11:14.191202 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:11:14.211531 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:11:14.211531 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:11:14.211531 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:11:14.221664 extend-filesystems[1513]: Resized filesystem in /dev/vda9
Jul 2 00:11:14.214264 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:11:14.214557 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:11:14.223549 bash[1573]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:11:14.228248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:11:14.233317 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
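The resize2fs output above reports an online grow of the root filesystem from 553472 to 1864699 blocks at a 4k block size. The implied before/after sizes work out as follows:

```python
BLOCK_SIZE = 4096  # ext4 block size implied by "(4k) blocks" in the resize2fs output
old_blocks, new_blocks = 553472, 1864699

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30
# Roughly 2.11 GiB grown to roughly 7.11 GiB, without unmounting /.
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")
```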
Jul 2 00:11:14.261663 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:11:14.363583 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:11:14.371832 containerd[1546]: time="2024-07-02T00:11:14.371723392Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:11:14.388895 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:11:14.390746 systemd-networkd[1242]: eth0: Gained IPv6LL
Jul 2 00:11:14.402523 containerd[1546]: time="2024-07-02T00:11:14.402462637Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:11:14.403192 containerd[1546]: time="2024-07-02T00:11:14.402680911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.404185 containerd[1546]: time="2024-07-02T00:11:14.404074237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:11:14.404185 containerd[1546]: time="2024-07-02T00:11:14.404167960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.404495 containerd[1546]: time="2024-07-02T00:11:14.404468965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:11:14.404495 containerd[1546]: time="2024-07-02T00:11:14.404494090Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.404571573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.404640047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.404653312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.404776665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.404993492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405014155Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405027420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405174699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405189286Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405244495Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:11:14.405686 containerd[1546]: time="2024-07-02T00:11:14.405256231Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:11:14.404974 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:11:14.406823 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:11:14.410059 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:11:14.413075 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 2 00:11:14.418859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.419879148Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.419956506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.419972829Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420012418Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420028617Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420043452Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420056800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420235527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420254825Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420273380Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420353507Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420372971Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420391897Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422427 containerd[1546]: time="2024-07-02T00:11:14.420406196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420418262Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420435040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420449586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420463016Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420475785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.420609386Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421035893Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421080399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421095234Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421123624Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421244084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421257638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421271936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.422832 containerd[1546]: time="2024-07-02T00:11:14.421285325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421299045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421312971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421326236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421344253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421357518Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421513930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421533848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421546204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421558726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421573313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421605752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421620175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423188 containerd[1546]: time="2024-07-02T00:11:14.421631663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:11:14.423494 containerd[1546]: time="2024-07-02T00:11:14.422040234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:11:14.423494 containerd[1546]: time="2024-07-02T00:11:14.422105485Z" level=info msg="Connect containerd service"
Jul 2 00:11:14.423494 containerd[1546]: time="2024-07-02T00:11:14.422148545Z" level=info msg="using legacy CRI server"
Jul 2 00:11:14.423494 containerd[1546]: time="2024-07-02T00:11:14.422156851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:11:14.423494 containerd[1546]: time="2024-07-02T00:11:14.422313304Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.425751720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.425814326Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.425854576Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.425867262Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.425880279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426415468Z" level=info msg="Start subscribing containerd event"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426537168Z" level=info msg="Start recovering state"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426618122Z" level=info msg="Start event monitor"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426639569Z" level=info msg="Start snapshots syncer"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426650396Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:11:14.426821 containerd[1546]: time="2024-07-02T00:11:14.426657504Z" level=info msg="Start streaming server"
Jul 2 00:11:14.423949 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:11:14.425565 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:11:14.425846 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:11:14.427223 containerd[1546]: time="2024-07-02T00:11:14.427129839Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:11:14.427223 containerd[1546]: time="2024-07-02T00:11:14.427201164Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:11:14.431883 containerd[1546]: time="2024-07-02T00:11:14.430091539Z" level=info msg="containerd successfully booted in 0.060038s"
Jul 2 00:11:14.431056 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:11:14.448664 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:11:14.461873 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:11:14.464942 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 2 00:11:14.465219 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 2 00:11:14.467848 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:11:14.474552 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:11:14.485999 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:11:14.488662 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 00:11:14.489824 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:11:14.585063 tar[1542]: linux-arm64/LICENSE
Jul 2 00:11:14.585171 tar[1542]: linux-arm64/README.md
Jul 2 00:11:14.596754 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:11:15.071119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:11:15.072779 systemd[1]: Reached target multi-user.target - Multi-User System.
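An aside on the CRI config dump that containerd logs above ("Start cri plugin with config ..."): it shows the runc runtime running with `Options:map[SystemdCgroup:false]`. As a sketch only, the stanza of `/etc/containerd/config.toml` that corresponds to that dumped value would look roughly like the following (shown with `SystemdCgroup = false` to match this log; on systemd-managed nodes it is commonly set to `true` instead):

```toml
# Hypothetical /etc/containerd/config.toml fragment matching the dumped CRI config.
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
```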
Jul 2 00:11:15.076284 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:11:15.077753 systemd[1]: Startup finished in 5.448s (kernel) + 3.526s (userspace) = 8.974s.
Jul 2 00:11:15.815611 kubelet[1650]: E0702 00:11:15.815502 1650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:11:15.818814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:11:15.819014 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:11:19.625877 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:11:19.633870 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:44082.service - OpenSSH per-connection server daemon (10.0.0.1:44082).
Jul 2 00:11:19.701322 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 44082 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:19.703669 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:19.713966 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:11:19.715860 systemd-logind[1524]: New session 1 of user core.
Jul 2 00:11:19.725897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:11:19.736962 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:11:19.755988 systemd[1]: Starting user@500.service - User Manager for UID 500...
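The kubelet failure above is the most actionable event in this stretch of the log: the unit exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet (typical of a node that has not run `kubeadm init`/`join`). The missing path can be pulled straight out of the error line, for example:

```python
import re

# The kubelet error entry from the journal above (abbreviated).
log_line = ('kubelet[1650]: E0702 00:11:15.815502 1650 run.go:74] "command failed" '
            'err="failed to load kubelet config file, '
            'path: /var/lib/kubelet/config.yaml, error: ..."')

# Extract the config path named after "path:" up to the following comma.
m = re.search(r'path: (\S+?),', log_line)
missing = m.group(1)
print(missing)  # /var/lib/kubelet/config.yaml
```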
Jul 2 00:11:19.759052 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:19.840906 systemd[1670]: Queued start job for default target default.target.
Jul 2 00:11:19.841333 systemd[1670]: Created slice app.slice - User Application Slice.
Jul 2 00:11:19.841374 systemd[1670]: Reached target paths.target - Paths.
Jul 2 00:11:19.841385 systemd[1670]: Reached target timers.target - Timers.
Jul 2 00:11:19.856702 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:11:19.863959 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:11:19.864043 systemd[1670]: Reached target sockets.target - Sockets.
Jul 2 00:11:19.864061 systemd[1670]: Reached target basic.target - Basic System.
Jul 2 00:11:19.864105 systemd[1670]: Reached target default.target - Main User Target.
Jul 2 00:11:19.864134 systemd[1670]: Startup finished in 98ms.
Jul 2 00:11:19.864469 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:11:19.866104 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:11:19.940897 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:44096.service - OpenSSH per-connection server daemon (10.0.0.1:44096).
Jul 2 00:11:19.980470 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 44096 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:19.981942 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:19.989626 systemd-logind[1524]: New session 2 of user core.
Jul 2 00:11:20.000915 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:11:20.056829 sshd[1682]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:20.074906 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:44106.service - OpenSSH per-connection server daemon (10.0.0.1:44106).
Jul 2 00:11:20.075311 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:44096.service: Deactivated successfully.
Jul 2 00:11:20.077310 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:11:20.077884 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:11:20.079386 systemd-logind[1524]: Removed session 2.
Jul 2 00:11:20.114179 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 44106 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:20.115624 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:20.119783 systemd-logind[1524]: New session 3 of user core.
Jul 2 00:11:20.131926 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:11:20.181596 sshd[1687]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:20.196899 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936).
Jul 2 00:11:20.197341 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:44106.service: Deactivated successfully.
Jul 2 00:11:20.200218 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:11:20.201512 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:11:20.203786 systemd-logind[1524]: Removed session 3.
Jul 2 00:11:20.233385 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:20.234813 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:20.240652 systemd-logind[1524]: New session 4 of user core.
Jul 2 00:11:20.256893 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:11:20.311556 sshd[1695]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:20.322915 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:45940.service - OpenSSH per-connection server daemon (10.0.0.1:45940).
Jul 2 00:11:20.323475 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:45936.service: Deactivated successfully.
Jul 2 00:11:20.326308 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:11:20.327292 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:11:20.328699 systemd-logind[1524]: Removed session 4.
Jul 2 00:11:20.361833 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 45940 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:20.360438 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:20.369478 systemd-logind[1524]: New session 5 of user core.
Jul 2 00:11:20.375884 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:11:20.457318 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:11:20.458087 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:11:20.476774 sudo[1710]: pam_unix(sudo:session): session closed for user root
Jul 2 00:11:20.479278 sshd[1703]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:20.489946 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:45956.service - OpenSSH per-connection server daemon (10.0.0.1:45956).
Jul 2 00:11:20.490392 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:45940.service: Deactivated successfully.
Jul 2 00:11:20.493410 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:11:20.495268 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:11:20.498368 systemd-logind[1524]: Removed session 5.
Jul 2 00:11:20.530961 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 45956 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:20.532618 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:20.537003 systemd-logind[1524]: New session 6 of user core.
Jul 2 00:11:20.550954 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:11:20.608134 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:11:20.608510 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:11:20.612477 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jul 2 00:11:20.618123 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:11:20.618393 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:11:20.635854 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:11:20.637875 auditctl[1723]: No rules
Jul 2 00:11:20.638283 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:11:20.638529 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:11:20.642632 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:11:20.682287 augenrules[1742]: No rules
Jul 2 00:11:20.683660 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:11:20.685082 sudo[1719]: pam_unix(sudo:session): session closed for user root
Jul 2 00:11:20.687045 sshd[1712]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:20.708028 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:45968.service - OpenSSH per-connection server daemon (10.0.0.1:45968).
Jul 2 00:11:20.708527 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:45956.service: Deactivated successfully.
Jul 2 00:11:20.710745 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:11:20.711602 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:11:20.713770 systemd-logind[1524]: Removed session 6.
Jul 2 00:11:20.748143 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 45968 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:11:20.749692 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:11:20.755795 systemd-logind[1524]: New session 7 of user core.
Jul 2 00:11:20.765953 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:11:20.823011 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:11:20.823277 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:11:20.951865 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:11:20.952142 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:11:21.219004 dockerd[1765]: time="2024-07-02T00:11:21.218928670Z" level=info msg="Starting up"
Jul 2 00:11:21.432971 dockerd[1765]: time="2024-07-02T00:11:21.432859706Z" level=info msg="Loading containers: start."
Jul 2 00:11:21.541605 kernel: Initializing XFRM netlink socket
Jul 2 00:11:21.620052 systemd-networkd[1242]: docker0: Link UP
Jul 2 00:11:21.631644 dockerd[1765]: time="2024-07-02T00:11:21.631420420Z" level=info msg="Loading containers: done."
Jul 2 00:11:21.692166 dockerd[1765]: time="2024-07-02T00:11:21.692112748Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:11:21.692830 dockerd[1765]: time="2024-07-02T00:11:21.692322114Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:11:21.692830 dockerd[1765]: time="2024-07-02T00:11:21.692447159Z" level=info msg="Daemon has completed initialization"
Jul 2 00:11:21.692502 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck334493087-merged.mount: Deactivated successfully.
Jul 2 00:11:21.721674 dockerd[1765]: time="2024-07-02T00:11:21.720995026Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:11:21.721732 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:11:22.412248 containerd[1546]: time="2024-07-02T00:11:22.412181462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 00:11:23.095028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065743435.mount: Deactivated successfully.
Jul 2 00:11:24.079732 containerd[1546]: time="2024-07-02T00:11:24.079660151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:24.082459 containerd[1546]: time="2024-07-02T00:11:24.082406501Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671540"
Jul 2 00:11:24.084385 containerd[1546]: time="2024-07-02T00:11:24.084331884Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:24.088045 containerd[1546]: time="2024-07-02T00:11:24.087997410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:24.089666 containerd[1546]: time="2024-07-02T00:11:24.089618324Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 1.67739328s"
Jul 2 00:11:24.089720 containerd[1546]: time="2024-07-02T00:11:24.089670333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 00:11:24.110757 containerd[1546]: time="2024-07-02T00:11:24.110700053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 00:11:25.367400 containerd[1546]: time="2024-07-02T00:11:25.366867153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:25.368901 containerd[1546]: time="2024-07-02T00:11:25.368850189Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893120"
Jul 2 00:11:25.370491 containerd[1546]: time="2024-07-02T00:11:25.370457827Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:25.374616 containerd[1546]: time="2024-07-02T00:11:25.374516722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:25.375692 containerd[1546]: time="2024-07-02T00:11:25.375654003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.264889654s"
Jul 2 00:11:25.375915 containerd[1546]: time="2024-07-02T00:11:25.375793135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\""
Jul 2 00:11:25.398495 containerd[1546]: time="2024-07-02T00:11:25.398308648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 00:11:26.071245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:11:26.086809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:11:26.190732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:11:26.196108 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:11:26.245991 kubelet[1992]: E0702 00:11:26.245885 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:11:26.251150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:11:26.251339 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:11:26.501058 containerd[1546]: time="2024-07-02T00:11:26.500765968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:26.502224 containerd[1546]: time="2024-07-02T00:11:26.502159518Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358440"
Jul 2 00:11:26.503220 containerd[1546]: time="2024-07-02T00:11:26.503148146Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:26.509051 containerd[1546]: time="2024-07-02T00:11:26.508995918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:26.510283 containerd[1546]: time="2024-07-02T00:11:26.510236535Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.111872239s"
Jul 2 00:11:26.510283 containerd[1546]: time="2024-07-02T00:11:26.510279621Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\""
Jul 2 00:11:26.535203 containerd[1546]: time="2024-07-02T00:11:26.535110043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:11:27.602430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346258691.mount: Deactivated successfully.
Jul 2 00:11:27.894259 containerd[1546]: time="2024-07-02T00:11:27.894114773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:27.895256 containerd[1546]: time="2024-07-02T00:11:27.894854463Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772463"
Jul 2 00:11:27.896107 containerd[1546]: time="2024-07-02T00:11:27.896004453Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:27.899600 containerd[1546]: time="2024-07-02T00:11:27.898976532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:27.900478 containerd[1546]: time="2024-07-02T00:11:27.900424129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.365268276s"
Jul 2 00:11:27.900478 containerd[1546]: time="2024-07-02T00:11:27.900470799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\""
Jul 2 00:11:27.923631 containerd[1546]: time="2024-07-02T00:11:27.923590219Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:11:28.382108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790545919.mount: Deactivated successfully.
Jul 2 00:11:28.386328 containerd[1546]: time="2024-07-02T00:11:28.386275526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:28.386848 containerd[1546]: time="2024-07-02T00:11:28.386813579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jul 2 00:11:28.387724 containerd[1546]: time="2024-07-02T00:11:28.387664099Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:28.390032 containerd[1546]: time="2024-07-02T00:11:28.389992525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:28.391029 containerd[1546]: time="2024-07-02T00:11:28.390994212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 467.360589ms"
Jul 2 00:11:28.391029 containerd[1546]: time="2024-07-02T00:11:28.391031843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 00:11:28.411021 containerd[1546]: time="2024-07-02T00:11:28.410921100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:11:28.898450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949057712.mount: Deactivated successfully.
Jul 2 00:11:30.386965 containerd[1546]: time="2024-07-02T00:11:30.386893130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:30.389082 containerd[1546]: time="2024-07-02T00:11:30.388991419Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Jul 2 00:11:30.390265 containerd[1546]: time="2024-07-02T00:11:30.390214219Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:30.393635 containerd[1546]: time="2024-07-02T00:11:30.392982477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:30.395288 containerd[1546]: time="2024-07-02T00:11:30.395243439Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.984279213s"
Jul 2 00:11:30.395288 containerd[1546]: time="2024-07-02T00:11:30.395286487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 00:11:30.415657 containerd[1546]: time="2024-07-02T00:11:30.415552145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:11:30.977707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355523078.mount: Deactivated successfully.
Jul 2 00:11:31.285482 containerd[1546]: time="2024-07-02T00:11:31.285325241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:31.286034 containerd[1546]: time="2024-07-02T00:11:31.285989786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Jul 2 00:11:31.287017 containerd[1546]: time="2024-07-02T00:11:31.286948173Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:31.289363 containerd[1546]: time="2024-07-02T00:11:31.289281927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:31.290308 containerd[1546]: time="2024-07-02T00:11:31.290271539Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 874.665148ms"
Jul 2 00:11:31.290477 containerd[1546]: time="2024-07-02T00:11:31.290312760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 00:11:35.374776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:11:35.384832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:11:35.409013 systemd[1]: Reloading requested from client PID 2179 ('systemctl') (unit session-7.scope)...
Jul 2 00:11:35.409035 systemd[1]: Reloading...
Jul 2 00:11:35.487594 zram_generator::config[2220]: No configuration found.
Jul 2 00:11:35.600421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:11:35.652223 systemd[1]: Reloading finished in 242 ms.
Jul 2 00:11:35.702334 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:11:35.702411 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:11:35.702704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:11:35.705916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:11:35.803435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:11:35.808052 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:11:35.851598 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:11:35.851598 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:11:35.851598 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:11:35.851598 kubelet[2273]: I0702 00:11:35.850961 2273 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:11:36.488916 kubelet[2273]: I0702 00:11:36.488869 2273 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:11:36.488916 kubelet[2273]: I0702 00:11:36.488912 2273 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:11:36.489282 kubelet[2273]: I0702 00:11:36.489253 2273 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:11:36.605413 kubelet[2273]: I0702 00:11:36.605181 2273 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:11:36.607472 kubelet[2273]: E0702 00:11:36.607442 2273 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.614844 kubelet[2273]: W0702 00:11:36.614809 2273 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 00:11:36.615723 kubelet[2273]: I0702 00:11:36.615699 2273 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:11:36.616085 kubelet[2273]: I0702 00:11:36.616061 2273 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:11:36.616270 kubelet[2273]: I0702 00:11:36.616247 2273 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:11:36.616343 kubelet[2273]: I0702 00:11:36.616276 2273 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:11:36.616343 kubelet[2273]: I0702 00:11:36.616285 2273 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:11:36.616490 kubelet[2273]: I0702 00:11:36.616476 2273 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:11:36.618320 kubelet[2273]: W0702 00:11:36.618245 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.618320 kubelet[2273]: E0702 00:11:36.618310 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.618430 kubelet[2273]: I0702 00:11:36.618408 2273 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:11:36.618479 kubelet[2273]: I0702 00:11:36.618442 2273 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:11:36.618547 kubelet[2273]: I0702 00:11:36.618538 2273 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:11:36.618595 kubelet[2273]: I0702 00:11:36.618570 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:11:36.620849 kubelet[2273]: W0702 00:11:36.620807 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.620904 kubelet[2273]: E0702 00:11:36.620856 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.625674 kubelet[2273]: I0702 00:11:36.625647 2273 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:11:36.633717 kubelet[2273]: W0702 00:11:36.633690 2273 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:11:36.636141 kubelet[2273]: I0702 00:11:36.636114 2273 server.go:1232] "Started kubelet"
Jul 2 00:11:36.636834 kubelet[2273]: I0702 00:11:36.636256 2273 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:11:36.636834 kubelet[2273]: I0702 00:11:36.636598 2273 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:11:36.636834 kubelet[2273]: I0702 00:11:36.636653 2273 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:11:36.638249 kubelet[2273]: I0702 00:11:36.638214 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:11:36.639170 kubelet[2273]: E0702 00:11:36.638722 2273 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 00:11:36.639170 kubelet[2273]: E0702 00:11:36.638753 2273 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:11:36.639894 kubelet[2273]: E0702 00:11:36.639758 2273 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3ced139d066c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 11, 36, 636081772, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 11, 36, 636081772, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.82:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.82:6443: connect: connection refused'(may retry after sleeping)
Jul 2 00:11:36.640387 kubelet[2273]: I0702 00:11:36.640363 2273 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:11:36.646258 kubelet[2273]: E0702 00:11:36.645677 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:11:36.646258 kubelet[2273]: I0702 00:11:36.645801 2273 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:11:36.646258 kubelet[2273]: I0702 00:11:36.645951 2273 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:11:36.646258 kubelet[2273]: I0702 00:11:36.646041 2273 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:11:36.646258 kubelet[2273]: E0702 00:11:36.646066 2273 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
Jul 2 00:11:36.646694 kubelet[2273]: W0702 00:11:36.646457 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.646694 kubelet[2273]: E0702 00:11:36.646506 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.667474 kubelet[2273]: I0702 00:11:36.664416 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:11:36.667474 kubelet[2273]: I0702 00:11:36.665855 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:11:36.667474 kubelet[2273]: I0702 00:11:36.665886 2273 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:11:36.667474 kubelet[2273]: I0702 00:11:36.665906 2273 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 00:11:36.667474 kubelet[2273]: E0702 00:11:36.665982 2273 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:11:36.667474 kubelet[2273]: W0702 00:11:36.666893 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.667474 kubelet[2273]: E0702 00:11:36.666930 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jul 2 00:11:36.681830 kubelet[2273]: I0702 00:11:36.681767 2273 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:11:36.681830 kubelet[2273]: I0702 00:11:36.681805 2273 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:11:36.681830 kubelet[2273]: I0702 00:11:36.681826 2273 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:11:36.685008 kubelet[2273]: I0702 00:11:36.684972 2273 policy_none.go:49] "None policy: Start"
Jul 2 00:11:36.685585 kubelet[2273]: I0702 00:11:36.685515 2273 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 00:11:36.685661 kubelet[2273]: I0702 00:11:36.685600 2273 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:11:36.692585 kubelet[2273]: I0702 00:11:36.691817 2273 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:11:36.692585 kubelet[2273]: I0702 00:11:36.692076 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:11:36.693140 kubelet[2273]: E0702 00:11:36.693106 2273 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 00:11:36.747651 kubelet[2273]: I0702 00:11:36.747504 2273 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 00:11:36.748003 kubelet[2273]: E0702 00:11:36.747933 2273 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Jul 2 00:11:36.766189 kubelet[2273]: I0702 00:11:36.766077 2273 topology_manager.go:215] "Topology Admit Handler" podUID="f6d642fa2197c6c91511eae1502c6020" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:11:36.767469 kubelet[2273]: I0702 00:11:36.767338 2273 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:11:36.768574 kubelet[2273]: I0702 00:11:36.768523 2273 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:11:36.847334 kubelet[2273]: E0702 00:11:36.847295 2273 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms"
Jul 2 00:11:36.948655 kubelet[2273]: I0702 00:11:36.948625 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:36.948975 kubelet[2273]: I0702 00:11:36.948673 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:11:36.948975 kubelet[2273]: I0702 00:11:36.948698 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:36.948975 kubelet[2273]: I0702 00:11:36.948717 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:36.948975 kubelet[2273]: I0702 00:11:36.948736 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:36.948975 kubelet[2273]: I0702 00:11:36.948756 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName:
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:11:36.949154 kubelet[2273]: I0702 00:11:36.948774 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:11:36.949154 kubelet[2273]: I0702 00:11:36.948795 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:11:36.949154 kubelet[2273]: I0702 00:11:36.948813 2273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:11:36.949481 kubelet[2273]: I0702 00:11:36.949449 2273 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:11:36.949799 kubelet[2273]: E0702 00:11:36.949785 2273 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 2 00:11:37.078910 kubelet[2273]: E0702 00:11:37.078870 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.078910 kubelet[2273]: E0702 00:11:37.078912 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.079043 kubelet[2273]: E0702 00:11:37.078878 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.079715 containerd[1546]: time="2024-07-02T00:11:37.079656157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6d642fa2197c6c91511eae1502c6020,Namespace:kube-system,Attempt:0,}" Jul 2 00:11:37.079994 containerd[1546]: time="2024-07-02T00:11:37.079675747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:11:37.079994 containerd[1546]: time="2024-07-02T00:11:37.079690209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:11:37.249301 kubelet[2273]: E0702 00:11:37.247986 2273 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Jul 2 00:11:37.351514 kubelet[2273]: I0702 00:11:37.351120 2273 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:11:37.351514 kubelet[2273]: E0702 00:11:37.351416 2273 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jul 2 00:11:37.489137 kubelet[2273]: W0702 
00:11:37.489079 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.489275 kubelet[2273]: E0702 00:11:37.489264 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.570137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039876359.mount: Deactivated successfully. Jul 2 00:11:37.578057 containerd[1546]: time="2024-07-02T00:11:37.576859904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:11:37.580204 containerd[1546]: time="2024-07-02T00:11:37.578439324Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:11:37.581078 containerd[1546]: time="2024-07-02T00:11:37.581020760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 00:11:37.582059 containerd[1546]: time="2024-07-02T00:11:37.581986200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:11:37.582694 containerd[1546]: time="2024-07-02T00:11:37.582644769Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:11:37.585030 containerd[1546]: 
time="2024-07-02T00:11:37.584428503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:11:37.585030 containerd[1546]: time="2024-07-02T00:11:37.584597482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:11:37.589669 containerd[1546]: time="2024-07-02T00:11:37.587961677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:11:37.589669 containerd[1546]: time="2024-07-02T00:11:37.589440383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.578391ms" Jul 2 00:11:37.590695 containerd[1546]: time="2024-07-02T00:11:37.590116299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.156838ms" Jul 2 00:11:37.596139 containerd[1546]: time="2024-07-02T00:11:37.595815393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.054956ms" Jul 2 00:11:37.766525 
kubelet[2273]: W0702 00:11:37.766383 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.766525 kubelet[2273]: E0702 00:11:37.766449 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.773295 containerd[1546]: time="2024-07-02T00:11:37.769629156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:11:37.773295 containerd[1546]: time="2024-07-02T00:11:37.773163853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.773295 containerd[1546]: time="2024-07-02T00:11:37.773208241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:11:37.773295 containerd[1546]: time="2024-07-02T00:11:37.773254913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.776393 containerd[1546]: time="2024-07-02T00:11:37.769887833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:11:37.776393 containerd[1546]: time="2024-07-02T00:11:37.773750953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.776393 containerd[1546]: time="2024-07-02T00:11:37.773778715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:11:37.776393 containerd[1546]: time="2024-07-02T00:11:37.773793778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.777214 kubelet[2273]: W0702 00:11:37.777071 2273 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.777214 kubelet[2273]: E0702 00:11:37.777138 2273 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jul 2 00:11:37.777896 containerd[1546]: time="2024-07-02T00:11:37.777806368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:11:37.777896 containerd[1546]: time="2024-07-02T00:11:37.777873871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.777896 containerd[1546]: time="2024-07-02T00:11:37.777887772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:11:37.778067 containerd[1546]: time="2024-07-02T00:11:37.777897948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:11:37.830213 containerd[1546]: time="2024-07-02T00:11:37.830152026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7de013b05a2a5a7373cc3bb58a61f9f471211c2aac3647a8a7efd7293fbdaa01\"" Jul 2 00:11:37.831575 kubelet[2273]: E0702 00:11:37.831245 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.834733 containerd[1546]: time="2024-07-02T00:11:37.834696950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6d642fa2197c6c91511eae1502c6020,Namespace:kube-system,Attempt:0,} returns sandbox id \"7df4ad77037052d307d48e7dd5e5ee0d3ff411f801ee4a69e8e5fa9840a6833a\"" Jul 2 00:11:37.835378 kubelet[2273]: E0702 00:11:37.835358 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.836004 containerd[1546]: time="2024-07-02T00:11:37.835906164Z" level=info msg="CreateContainer within sandbox \"7de013b05a2a5a7373cc3bb58a61f9f471211c2aac3647a8a7efd7293fbdaa01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:11:37.837495 containerd[1546]: time="2024-07-02T00:11:37.837445523Z" level=info msg="CreateContainer within sandbox \"7df4ad77037052d307d48e7dd5e5ee0d3ff411f801ee4a69e8e5fa9840a6833a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:11:37.839223 containerd[1546]: time="2024-07-02T00:11:37.839195484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4420cdd45072f223e2dee0222d601bfa283c11c5093ef600738dc9862d7b268d\"" Jul 2 00:11:37.840665 kubelet[2273]: E0702 00:11:37.840639 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:37.843890 containerd[1546]: time="2024-07-02T00:11:37.843856948Z" level=info msg="CreateContainer within sandbox \"4420cdd45072f223e2dee0222d601bfa283c11c5093ef600738dc9862d7b268d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:11:37.855908 containerd[1546]: time="2024-07-02T00:11:37.855854694Z" level=info msg="CreateContainer within sandbox \"7de013b05a2a5a7373cc3bb58a61f9f471211c2aac3647a8a7efd7293fbdaa01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d030d4f43982fc487363d57b0f6f0ee7ac6b7c3c68f15be45ae35eba7ee55ec8\"" Jul 2 00:11:37.856603 containerd[1546]: time="2024-07-02T00:11:37.856396845Z" level=info msg="StartContainer for \"d030d4f43982fc487363d57b0f6f0ee7ac6b7c3c68f15be45ae35eba7ee55ec8\"" Jul 2 00:11:37.857933 containerd[1546]: time="2024-07-02T00:11:37.857696557Z" level=info msg="CreateContainer within sandbox \"7df4ad77037052d307d48e7dd5e5ee0d3ff411f801ee4a69e8e5fa9840a6833a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de6ba6279dc88d830c709b05056a9f915416f2e7ebcffee77ddbd45a52ad9b7b\"" Jul 2 00:11:37.859577 containerd[1546]: time="2024-07-02T00:11:37.859467430Z" level=info msg="StartContainer for \"de6ba6279dc88d830c709b05056a9f915416f2e7ebcffee77ddbd45a52ad9b7b\"" Jul 2 00:11:37.864379 containerd[1546]: time="2024-07-02T00:11:37.864267987Z" level=info msg="CreateContainer within sandbox \"4420cdd45072f223e2dee0222d601bfa283c11c5093ef600738dc9862d7b268d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30ee20d724fce70670d8a9a9d617187cd31095d9cc9af8b9d2dcdde36a1912a9\"" Jul 2 00:11:37.865751 
containerd[1546]: time="2024-07-02T00:11:37.864993018Z" level=info msg="StartContainer for \"30ee20d724fce70670d8a9a9d617187cd31095d9cc9af8b9d2dcdde36a1912a9\"" Jul 2 00:11:37.917492 containerd[1546]: time="2024-07-02T00:11:37.917431018Z" level=info msg="StartContainer for \"d030d4f43982fc487363d57b0f6f0ee7ac6b7c3c68f15be45ae35eba7ee55ec8\" returns successfully" Jul 2 00:11:37.927644 containerd[1546]: time="2024-07-02T00:11:37.927601724Z" level=info msg="StartContainer for \"de6ba6279dc88d830c709b05056a9f915416f2e7ebcffee77ddbd45a52ad9b7b\" returns successfully" Jul 2 00:11:37.970418 containerd[1546]: time="2024-07-02T00:11:37.970259335Z" level=info msg="StartContainer for \"30ee20d724fce70670d8a9a9d617187cd31095d9cc9af8b9d2dcdde36a1912a9\" returns successfully" Jul 2 00:11:38.049231 kubelet[2273]: E0702 00:11:38.048615 2273 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Jul 2 00:11:38.153973 kubelet[2273]: I0702 00:11:38.153936 2273 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:11:38.685036 kubelet[2273]: E0702 00:11:38.685005 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:38.686655 kubelet[2273]: E0702 00:11:38.686611 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:38.691272 kubelet[2273]: E0702 00:11:38.691251 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:39.631928 kubelet[2273]: I0702 00:11:39.631873 
2273 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:11:39.652692 kubelet[2273]: E0702 00:11:39.652612 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:39.692723 kubelet[2273]: E0702 00:11:39.691664 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:39.692723 kubelet[2273]: E0702 00:11:39.692044 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:39.752767 kubelet[2273]: E0702 00:11:39.752711 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:39.853184 kubelet[2273]: E0702 00:11:39.853144 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:39.953657 kubelet[2273]: E0702 00:11:39.953486 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:40.054027 kubelet[2273]: E0702 00:11:40.053983 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:40.154158 kubelet[2273]: E0702 00:11:40.154109 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:40.254728 kubelet[2273]: E0702 00:11:40.254606 2273 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:11:40.621228 kubelet[2273]: I0702 00:11:40.621183 2273 apiserver.go:52] "Watching apiserver" Jul 2 00:11:40.647075 kubelet[2273]: I0702 00:11:40.647043 2273 desired_state_of_world_populator.go:159] "Finished populating initial desired 
state of world" Jul 2 00:11:40.776795 kubelet[2273]: E0702 00:11:40.776724 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:41.693784 kubelet[2273]: E0702 00:11:41.693677 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:11:42.421652 systemd[1]: Reloading requested from client PID 2549 ('systemctl') (unit session-7.scope)... Jul 2 00:11:42.421674 systemd[1]: Reloading... Jul 2 00:11:42.484699 zram_generator::config[2589]: No configuration found. Jul 2 00:11:42.589222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:11:42.650181 systemd[1]: Reloading finished in 228 ms. Jul 2 00:11:42.679910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:11:42.680587 kubelet[2273]: I0702 00:11:42.680538 2273 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:11:42.699698 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:11:42.700212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:11:42.712011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:11:42.800142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:11:42.815029 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:11:42.862313 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:11:42.862313 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:11:42.862313 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:11:42.862313 kubelet[2638]: I0702 00:11:42.861955 2638 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:11:42.867420 kubelet[2638]: I0702 00:11:42.867377 2638 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:11:42.867420 kubelet[2638]: I0702 00:11:42.867407 2638 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:11:42.868145 kubelet[2638]: I0702 00:11:42.867700 2638 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:11:42.870634 kubelet[2638]: I0702 00:11:42.869980 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:11:42.871456 kubelet[2638]: I0702 00:11:42.871421 2638 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:11:42.876580 kubelet[2638]: W0702 00:11:42.876547 2638 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:11:42.877491 kubelet[2638]: I0702 00:11:42.877371 2638 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:11:42.877809 kubelet[2638]: I0702 00:11:42.877795 2638 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:11:42.877980 kubelet[2638]: I0702 00:11:42.877967 2638 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:11:42.878099 kubelet[2638]: I0702 00:11:42.877995 2638 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:11:42.878099 kubelet[2638]: I0702 00:11:42.878004 2638 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:11:42.878099 kubelet[2638]: I0702 
00:11:42.878039 2638 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:11:42.878182 kubelet[2638]: I0702 00:11:42.878120 2638 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:11:42.878182 kubelet[2638]: I0702 00:11:42.878133 2638 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:11:42.878182 kubelet[2638]: I0702 00:11:42.878157 2638 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:11:42.878182 kubelet[2638]: I0702 00:11:42.878167 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:11:42.880746 kubelet[2638]: I0702 00:11:42.879367 2638 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:11:42.880746 kubelet[2638]: I0702 00:11:42.879869 2638 server.go:1232] "Started kubelet" Jul 2 00:11:42.881600 kubelet[2638]: E0702 00:11:42.881124 2638 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:11:42.881600 kubelet[2638]: E0702 00:11:42.881162 2638 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:11:42.881600 kubelet[2638]: I0702 00:11:42.881412 2638 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:11:42.881600 kubelet[2638]: I0702 00:11:42.881416 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:11:42.881711 kubelet[2638]: I0702 00:11:42.881697 2638 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:11:42.883549 kubelet[2638]: I0702 00:11:42.881774 2638 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:11:42.883549 kubelet[2638]: I0702 00:11:42.882550 2638 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:11:42.883549 kubelet[2638]: I0702 00:11:42.882744 2638 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:11:42.883549 kubelet[2638]: I0702 00:11:42.882822 2638 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:11:42.883549 kubelet[2638]: I0702 00:11:42.882938 2638 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:11:42.920619 kubelet[2638]: I0702 00:11:42.918810 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:11:42.920619 kubelet[2638]: I0702 00:11:42.920525 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:11:42.920619 kubelet[2638]: I0702 00:11:42.920572 2638 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:11:42.920619 kubelet[2638]: I0702 00:11:42.920603 2638 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 00:11:42.920958 kubelet[2638]: E0702 00:11:42.920656 2638 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:11:42.934776 sudo[2669]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 00:11:42.935653 sudo[2669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 00:11:42.978368 kubelet[2638]: I0702 00:11:42.978333 2638 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:11:42.978368 kubelet[2638]: I0702 00:11:42.978358 2638 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:11:42.978368 kubelet[2638]: I0702 00:11:42.978376 2638 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:11:42.978580 kubelet[2638]: I0702 00:11:42.978523 2638 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:11:42.978580 kubelet[2638]: I0702 00:11:42.978543 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:11:42.978580 kubelet[2638]: I0702 00:11:42.978550 2638 policy_none.go:49] "None policy: Start"
Jul 2 00:11:42.979246 kubelet[2638]: I0702 00:11:42.979227 2638 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 00:11:42.979303 kubelet[2638]: I0702 00:11:42.979254 2638 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:11:42.979420 kubelet[2638]: I0702 00:11:42.979408 2638 state_mem.go:75] "Updated machine memory state"
Jul 2 00:11:42.983901 kubelet[2638]: I0702 00:11:42.983205 2638 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:11:42.983901 kubelet[2638]: I0702 00:11:42.983469 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:11:42.988591 kubelet[2638]: I0702 00:11:42.986980 2638 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 00:11:43.018289 kubelet[2638]: I0702 00:11:43.018246 2638 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jul 2 00:11:43.018444 kubelet[2638]: I0702 00:11:43.018333 2638 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jul 2 00:11:43.020771 kubelet[2638]: I0702 00:11:43.020741 2638 topology_manager.go:215] "Topology Admit Handler" podUID="f6d642fa2197c6c91511eae1502c6020" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:11:43.020889 kubelet[2638]: I0702 00:11:43.020871 2638 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:11:43.020932 kubelet[2638]: I0702 00:11:43.020918 2638 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:11:43.035654 kubelet[2638]: E0702 00:11:43.035622 2638 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:43.084927 kubelet[2638]: I0702 00:11:43.084887 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:43.084927 kubelet[2638]: I0702 00:11:43.084939 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:43.085089 kubelet[2638]: I0702 00:11:43.084962 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:43.085089 kubelet[2638]: I0702 00:11:43.084995 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:43.085089 kubelet[2638]: I0702 00:11:43.085017 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6d642fa2197c6c91511eae1502c6020-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6d642fa2197c6c91511eae1502c6020\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:11:43.085089 kubelet[2638]: I0702 00:11:43.085036 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:43.085089 kubelet[2638]: I0702 00:11:43.085055 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:43.085209 kubelet[2638]: I0702 00:11:43.085075 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:11:43.085209 kubelet[2638]: I0702 00:11:43.085096 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:11:43.335515 kubelet[2638]: E0702 00:11:43.335480 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.337060 kubelet[2638]: E0702 00:11:43.337026 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.337435 kubelet[2638]: E0702 00:11:43.337420 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.395902 sudo[2669]: pam_unix(sudo:session): session closed for user root
Jul 2 00:11:43.879330 kubelet[2638]: I0702 00:11:43.879282 2638 apiserver.go:52] "Watching apiserver"
Jul 2 00:11:43.884045 kubelet[2638]: I0702 00:11:43.883999 2638 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:11:43.937137 kubelet[2638]: E0702 00:11:43.937094 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.937982 kubelet[2638]: E0702 00:11:43.937956 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.938473 kubelet[2638]: E0702 00:11:43.938445 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:43.978981 kubelet[2638]: I0702 00:11:43.978930 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.978869317 podCreationTimestamp="2024-07-02 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:11:43.960983406 +0000 UTC m=+1.142470422" watchObservedRunningTime="2024-07-02 00:11:43.978869317 +0000 UTC m=+1.160356333"
Jul 2 00:11:43.991003 kubelet[2638]: I0702 00:11:43.990774 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.9906713209999998 podCreationTimestamp="2024-07-02 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:11:43.979025064 +0000 UTC m=+1.160512080" watchObservedRunningTime="2024-07-02 00:11:43.990671321 +0000 UTC m=+1.172158337"
Jul 2 00:11:43.991003 kubelet[2638]: I0702 00:11:43.990914 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.990896516 podCreationTimestamp="2024-07-02 00:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:11:43.99087266 +0000 UTC m=+1.172359676" watchObservedRunningTime="2024-07-02 00:11:43.990896516 +0000 UTC m=+1.172383532"
Jul 2 00:11:44.940636 kubelet[2638]: E0702 00:11:44.938896 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:45.490669 sudo[1755]: pam_unix(sudo:session): session closed for user root
Jul 2 00:11:45.493421 sshd[1749]: pam_unix(sshd:session): session closed for user core
Jul 2 00:11:45.498610 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:45968.service: Deactivated successfully.
Jul 2 00:11:45.501612 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:11:45.502457 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:11:45.503617 systemd-logind[1524]: Removed session 7.
Jul 2 00:11:45.939488 kubelet[2638]: E0702 00:11:45.939460 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:46.339860 kubelet[2638]: E0702 00:11:46.339821 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:50.552441 kubelet[2638]: E0702 00:11:50.552072 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:50.947387 kubelet[2638]: E0702 00:11:50.947272 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:54.029742 kubelet[2638]: E0702 00:11:54.028824 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:54.955691 kubelet[2638]: E0702 00:11:54.955664 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:56.347753 kubelet[2638]: E0702 00:11:56.347722 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:56.363766 kubelet[2638]: I0702 00:11:56.362406 2638 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:11:56.363995 containerd[1546]: time="2024-07-02T00:11:56.363935108Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:11:56.364388 kubelet[2638]: I0702 00:11:56.364127 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:11:56.738793 kubelet[2638]: I0702 00:11:56.737817 2638 topology_manager.go:215] "Topology Admit Handler" podUID="6931071c-58e4-495c-a819-3f45fa55f033" podNamespace="kube-system" podName="kube-proxy-trn52"
Jul 2 00:11:56.746746 kubelet[2638]: I0702 00:11:56.746473 2638 topology_manager.go:215] "Topology Admit Handler" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" podNamespace="kube-system" podName="cilium-s58zd"
Jul 2 00:11:56.775661 kubelet[2638]: I0702 00:11:56.775609 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6931071c-58e4-495c-a819-3f45fa55f033-kube-proxy\") pod \"kube-proxy-trn52\" (UID: \"6931071c-58e4-495c-a819-3f45fa55f033\") " pod="kube-system/kube-proxy-trn52"
Jul 2 00:11:56.775661 kubelet[2638]: I0702 00:11:56.775661 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-run\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776036 kubelet[2638]: I0702 00:11:56.775684 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cni-path\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776036 kubelet[2638]: I0702 00:11:56.775703 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-xtables-lock\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776036 kubelet[2638]: I0702 00:11:56.775741 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-hostproc\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776036 kubelet[2638]: I0702 00:11:56.775761 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-cgroup\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776542 kubelet[2638]: I0702 00:11:56.776187 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-lib-modules\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776542 kubelet[2638]: I0702 00:11:56.776233 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8fr\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776542 kubelet[2638]: I0702 00:11:56.776256 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-etc-cni-netd\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776542 kubelet[2638]: I0702 00:11:56.776276 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea1ad5ef-1ff9-4520-9401-86beb135399d-clustermesh-secrets\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776542 kubelet[2638]: I0702 00:11:56.776295 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6931071c-58e4-495c-a819-3f45fa55f033-lib-modules\") pod \"kube-proxy-trn52\" (UID: \"6931071c-58e4-495c-a819-3f45fa55f033\") " pod="kube-system/kube-proxy-trn52"
Jul 2 00:11:56.776710 kubelet[2638]: I0702 00:11:56.776315 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6931071c-58e4-495c-a819-3f45fa55f033-xtables-lock\") pod \"kube-proxy-trn52\" (UID: \"6931071c-58e4-495c-a819-3f45fa55f033\") " pod="kube-system/kube-proxy-trn52"
Jul 2 00:11:56.776710 kubelet[2638]: I0702 00:11:56.776335 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-config-path\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776710 kubelet[2638]: I0702 00:11:56.776358 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-kernel\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776710 kubelet[2638]: I0702 00:11:56.776385 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2vb\" (UniqueName: \"kubernetes.io/projected/6931071c-58e4-495c-a819-3f45fa55f033-kube-api-access-xj2vb\") pod \"kube-proxy-trn52\" (UID: \"6931071c-58e4-495c-a819-3f45fa55f033\") " pod="kube-system/kube-proxy-trn52"
Jul 2 00:11:56.776710 kubelet[2638]: I0702 00:11:56.776407 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-bpf-maps\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776814 kubelet[2638]: I0702 00:11:56.776426 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-net\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.776814 kubelet[2638]: I0702 00:11:56.776447 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-hubble-tls\") pod \"cilium-s58zd\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") " pod="kube-system/cilium-s58zd"
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891048 2638 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891092 2638 projected.go:198] Error preparing data for projected volume kube-api-access-zb8fr for pod kube-system/cilium-s58zd: configmap "kube-root-ca.crt" not found
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891171 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr podName:ea1ad5ef-1ff9-4520-9401-86beb135399d nodeName:}" failed. No retries permitted until 2024-07-02 00:11:57.391137581 +0000 UTC m=+14.572624597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zb8fr" (UniqueName: "kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr") pod "cilium-s58zd" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d") : configmap "kube-root-ca.crt" not found
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891645 2638 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891669 2638 projected.go:198] Error preparing data for projected volume kube-api-access-xj2vb for pod kube-system/kube-proxy-trn52: configmap "kube-root-ca.crt" not found
Jul 2 00:11:56.892277 kubelet[2638]: E0702 00:11:56.891728 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6931071c-58e4-495c-a819-3f45fa55f033-kube-api-access-xj2vb podName:6931071c-58e4-495c-a819-3f45fa55f033 nodeName:}" failed. No retries permitted until 2024-07-02 00:11:57.3917136 +0000 UTC m=+14.573200576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xj2vb" (UniqueName: "kubernetes.io/projected/6931071c-58e4-495c-a819-3f45fa55f033-kube-api-access-xj2vb") pod "kube-proxy-trn52" (UID: "6931071c-58e4-495c-a819-3f45fa55f033") : configmap "kube-root-ca.crt" not found
Jul 2 00:11:57.344592 kubelet[2638]: I0702 00:11:57.344250 2638 topology_manager.go:215] "Topology Admit Handler" podUID="017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qgr5b"
Jul 2 00:11:57.383626 kubelet[2638]: I0702 00:11:57.383580 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qgr5b\" (UID: \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\") " pod="kube-system/cilium-operator-6bc8ccdb58-qgr5b"
Jul 2 00:11:57.383626 kubelet[2638]: I0702 00:11:57.383642 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4ld\" (UniqueName: \"kubernetes.io/projected/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-kube-api-access-2b4ld\") pod \"cilium-operator-6bc8ccdb58-qgr5b\" (UID: \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\") " pod="kube-system/cilium-operator-6bc8ccdb58-qgr5b"
Jul 2 00:11:57.642695 kubelet[2638]: E0702 00:11:57.642359 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.643016 containerd[1546]: time="2024-07-02T00:11:57.642972542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-trn52,Uid:6931071c-58e4-495c-a819-3f45fa55f033,Namespace:kube-system,Attempt:0,}"
Jul 2 00:11:57.647781 kubelet[2638]: E0702 00:11:57.647746 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.648371 containerd[1546]: time="2024-07-02T00:11:57.648335807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qgr5b,Uid:017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e,Namespace:kube-system,Attempt:0,}"
Jul 2 00:11:57.649829 kubelet[2638]: E0702 00:11:57.649540 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.650196 containerd[1546]: time="2024-07-02T00:11:57.650063357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s58zd,Uid:ea1ad5ef-1ff9-4520-9401-86beb135399d,Namespace:kube-system,Attempt:0,}"
Jul 2 00:11:57.866761 containerd[1546]: time="2024-07-02T00:11:57.866644933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:11:57.866761 containerd[1546]: time="2024-07-02T00:11:57.866722316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.866761 containerd[1546]: time="2024-07-02T00:11:57.866739561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:11:57.866761 containerd[1546]: time="2024-07-02T00:11:57.866749925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.872088 containerd[1546]: time="2024-07-02T00:11:57.871972427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:11:57.872088 containerd[1546]: time="2024-07-02T00:11:57.872043168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.872340 containerd[1546]: time="2024-07-02T00:11:57.872198014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:11:57.872340 containerd[1546]: time="2024-07-02T00:11:57.872234545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.878450 containerd[1546]: time="2024-07-02T00:11:57.878318342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:11:57.878450 containerd[1546]: time="2024-07-02T00:11:57.878394284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.878921 containerd[1546]: time="2024-07-02T00:11:57.878788601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:11:57.879660 containerd[1546]: time="2024-07-02T00:11:57.879414265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:11:57.933830 containerd[1546]: time="2024-07-02T00:11:57.933422899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qgr5b,Uid:017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\""
Jul 2 00:11:57.934256 containerd[1546]: time="2024-07-02T00:11:57.934228537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-trn52,Uid:6931071c-58e4-495c-a819-3f45fa55f033,Namespace:kube-system,Attempt:0,} returns sandbox id \"a85a7ef726f38f8996c146528f7ac94e32875830147b264701d3902e701ad061\""
Jul 2 00:11:57.934676 kubelet[2638]: E0702 00:11:57.934653 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.936482 kubelet[2638]: E0702 00:11:57.936441 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.937902 containerd[1546]: time="2024-07-02T00:11:57.937781947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s58zd,Uid:ea1ad5ef-1ff9-4520-9401-86beb135399d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\""
Jul 2 00:11:57.941571 containerd[1546]: time="2024-07-02T00:11:57.938273612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 00:11:57.942860 kubelet[2638]: E0702 00:11:57.942124 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:57.942936 containerd[1546]: time="2024-07-02T00:11:57.942705441Z" level=info msg="CreateContainer within sandbox \"a85a7ef726f38f8996c146528f7ac94e32875830147b264701d3902e701ad061\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:11:58.049175 containerd[1546]: time="2024-07-02T00:11:58.049117443Z" level=info msg="CreateContainer within sandbox \"a85a7ef726f38f8996c146528f7ac94e32875830147b264701d3902e701ad061\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96c76a4300d803f428bff93737cf7fffd959a48b3826e2221ad55eda48871d15\""
Jul 2 00:11:58.050097 containerd[1546]: time="2024-07-02T00:11:58.050048584Z" level=info msg="StartContainer for \"96c76a4300d803f428bff93737cf7fffd959a48b3826e2221ad55eda48871d15\""
Jul 2 00:11:58.095729 containerd[1546]: time="2024-07-02T00:11:58.095680583Z" level=info msg="StartContainer for \"96c76a4300d803f428bff93737cf7fffd959a48b3826e2221ad55eda48871d15\" returns successfully"
Jul 2 00:11:58.967634 kubelet[2638]: E0702 00:11:58.967575 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:11:58.981214 kubelet[2638]: I0702 00:11:58.980865 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-trn52" podStartSLOduration=2.980813973 podCreationTimestamp="2024-07-02 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:11:58.979443348 +0000 UTC m=+16.160930324" watchObservedRunningTime="2024-07-02 00:11:58.980813973 +0000 UTC m=+16.162300989"
Jul 2 00:11:59.110211 update_engine[1535]: I0702 00:11:59.110167 1535 update_attempter.cc:509] Updating boot flags...
Jul 2 00:11:59.144673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3015)
Jul 2 00:11:59.240792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3018)
Jul 2 00:11:59.249177 containerd[1546]: time="2024-07-02T00:11:59.249127318Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:59.250128 containerd[1546]: time="2024-07-02T00:11:59.250092775Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138354"
Jul 2 00:11:59.252022 containerd[1546]: time="2024-07-02T00:11:59.251983519Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:11:59.255179 containerd[1546]: time="2024-07-02T00:11:59.255112513Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.316796529s"
Jul 2 00:11:59.255179 containerd[1546]: time="2024-07-02T00:11:59.255171289Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 00:11:59.256872 containerd[1546]: time="2024-07-02T00:11:59.256613313Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 00:11:59.258195 containerd[1546]: time="2024-07-02T00:11:59.258155044Z" level=info msg="CreateContainer within sandbox \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:11:59.274738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3018)
Jul 2 00:11:59.278362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656697590.mount: Deactivated successfully.
Jul 2 00:11:59.281509 containerd[1546]: time="2024-07-02T00:11:59.281467817Z" level=info msg="CreateContainer within sandbox \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\""
Jul 2 00:11:59.285864 containerd[1546]: time="2024-07-02T00:11:59.285818257Z" level=info msg="StartContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\""
Jul 2 00:11:59.339145 containerd[1546]: time="2024-07-02T00:11:59.339036960Z" level=info msg="StartContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" returns successfully"
Jul 2 00:12:00.017905 kubelet[2638]: E0702 00:12:00.017257 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:00.033587 kubelet[2638]: E0702 00:12:00.019479 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:01.007748 kubelet[2638]: E0702 00:12:01.007718 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:01.910510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509814287.mount: Deactivated successfully.
Jul 2 00:12:02.937216 kubelet[2638]: I0702 00:12:02.937164 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qgr5b" podStartSLOduration=4.618098037 podCreationTimestamp="2024-07-02 00:11:57 +0000 UTC" firstStartedPulling="2024-07-02 00:11:57.936644771 +0000 UTC m=+15.118131787" lastFinishedPulling="2024-07-02 00:11:59.25566406 +0000 UTC m=+16.437151076" observedRunningTime="2024-07-02 00:12:00.064873057 +0000 UTC m=+17.246360113" watchObservedRunningTime="2024-07-02 00:12:02.937117326 +0000 UTC m=+20.118604342"
Jul 2 00:12:03.630522 containerd[1546]: time="2024-07-02T00:12:03.630471252Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:03.631867 containerd[1546]: time="2024-07-02T00:12:03.631086907Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651522"
Jul 2 00:12:03.632079 containerd[1546]: time="2024-07-02T00:12:03.632029393Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:03.635578 containerd[1546]: time="2024-07-02T00:12:03.634333297Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.377681374s"
Jul 2 00:12:03.635578 containerd[1546]: time="2024-07-02T00:12:03.634382628Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 00:12:03.637498 containerd[1546]: time="2024-07-02T00:12:03.637264258Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:12:03.652299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333647803.mount: Deactivated successfully.
Jul 2 00:12:03.653352 containerd[1546]: time="2024-07-02T00:12:03.653124089Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\""
Jul 2 00:12:03.654371 containerd[1546]: time="2024-07-02T00:12:03.653796916Z" level=info msg="StartContainer for \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\""
Jul 2 00:12:03.704424 containerd[1546]: time="2024-07-02T00:12:03.703150516Z" level=info msg="StartContainer for \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\" returns successfully"
Jul 2 00:12:04.003745 containerd[1546]: time="2024-07-02T00:12:04.002651552Z" level=info msg="shim disconnected" id=c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e namespace=k8s.io
Jul 2 00:12:04.003745 containerd[1546]: time="2024-07-02T00:12:04.002728288Z" level=warning msg="cleaning up after shim disconnected" id=c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e namespace=k8s.io
Jul 2 00:12:04.003745 containerd[1546]: time="2024-07-02T00:12:04.002738851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2
00:12:04.017936 kubelet[2638]: E0702 00:12:04.017673 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:04.649598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e-rootfs.mount: Deactivated successfully. Jul 2 00:12:05.020703 kubelet[2638]: E0702 00:12:05.019803 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:05.031784 containerd[1546]: time="2024-07-02T00:12:05.026744640Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:12:05.067990 containerd[1546]: time="2024-07-02T00:12:05.067852429Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\"" Jul 2 00:12:05.068522 containerd[1546]: time="2024-07-02T00:12:05.068360571Z" level=info msg="StartContainer for \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\"" Jul 2 00:12:05.113830 containerd[1546]: time="2024-07-02T00:12:05.113765736Z" level=info msg="StartContainer for \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\" returns successfully" Jul 2 00:12:05.145878 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:12:05.146358 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:12:05.146434 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 00:12:05.153022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:12:05.171505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:12:05.188165 containerd[1546]: time="2024-07-02T00:12:05.188052975Z" level=info msg="shim disconnected" id=024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388 namespace=k8s.io Jul 2 00:12:05.188165 containerd[1546]: time="2024-07-02T00:12:05.188128030Z" level=warning msg="cleaning up after shim disconnected" id=024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388 namespace=k8s.io Jul 2 00:12:05.188165 containerd[1546]: time="2024-07-02T00:12:05.188138672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:12:05.649891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388-rootfs.mount: Deactivated successfully. Jul 2 00:12:06.024069 kubelet[2638]: E0702 00:12:06.023434 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:06.031806 containerd[1546]: time="2024-07-02T00:12:06.031596829Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:12:06.069294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141403977.mount: Deactivated successfully. 
Jul 2 00:12:06.072368 containerd[1546]: time="2024-07-02T00:12:06.072325741Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\"" Jul 2 00:12:06.076068 containerd[1546]: time="2024-07-02T00:12:06.072993588Z" level=info msg="StartContainer for \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\"" Jul 2 00:12:06.124393 containerd[1546]: time="2024-07-02T00:12:06.124340720Z" level=info msg="StartContainer for \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\" returns successfully" Jul 2 00:12:06.201526 containerd[1546]: time="2024-07-02T00:12:06.201420430Z" level=info msg="shim disconnected" id=7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337 namespace=k8s.io Jul 2 00:12:06.201526 containerd[1546]: time="2024-07-02T00:12:06.201514368Z" level=warning msg="cleaning up after shim disconnected" id=7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337 namespace=k8s.io Jul 2 00:12:06.201526 containerd[1546]: time="2024-07-02T00:12:06.201526650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:12:06.649679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337-rootfs.mount: Deactivated successfully. 
Jul 2 00:12:07.027119 kubelet[2638]: E0702 00:12:07.027050 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:07.031729 containerd[1546]: time="2024-07-02T00:12:07.029268025Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:12:07.056328 containerd[1546]: time="2024-07-02T00:12:07.056269059Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\"" Jul 2 00:12:07.059014 containerd[1546]: time="2024-07-02T00:12:07.058961148Z" level=info msg="StartContainer for \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\"" Jul 2 00:12:07.107687 containerd[1546]: time="2024-07-02T00:12:07.107630045Z" level=info msg="StartContainer for \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\" returns successfully" Jul 2 00:12:07.123781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16-rootfs.mount: Deactivated successfully. 
Jul 2 00:12:07.132245 containerd[1546]: time="2024-07-02T00:12:07.132175272Z" level=info msg="shim disconnected" id=1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16 namespace=k8s.io Jul 2 00:12:07.132245 containerd[1546]: time="2024-07-02T00:12:07.132231242Z" level=warning msg="cleaning up after shim disconnected" id=1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16 namespace=k8s.io Jul 2 00:12:07.132245 containerd[1546]: time="2024-07-02T00:12:07.132243204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:12:08.030848 kubelet[2638]: E0702 00:12:08.030784 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:08.033876 containerd[1546]: time="2024-07-02T00:12:08.033836619Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:12:08.056163 containerd[1546]: time="2024-07-02T00:12:08.056101616Z" level=info msg="CreateContainer within sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\"" Jul 2 00:12:08.056750 containerd[1546]: time="2024-07-02T00:12:08.056687358Z" level=info msg="StartContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\"" Jul 2 00:12:08.112037 containerd[1546]: time="2024-07-02T00:12:08.111996191Z" level=info msg="StartContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" returns successfully" Jul 2 00:12:08.220588 kubelet[2638]: I0702 00:12:08.220325 2638 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:12:08.239570 kubelet[2638]: I0702 00:12:08.239524 2638 
topology_manager.go:215] "Topology Admit Handler" podUID="494e11b6-9a48-4d69-ad2e-af504afc2a0a" podNamespace="kube-system" podName="coredns-5dd5756b68-jc5m4" Jul 2 00:12:08.242334 kubelet[2638]: I0702 00:12:08.242296 2638 topology_manager.go:215] "Topology Admit Handler" podUID="5b6af78c-71ab-4a94-9587-2d9dd3f62d96" podNamespace="kube-system" podName="coredns-5dd5756b68-cfqfk" Jul 2 00:12:08.274571 kubelet[2638]: I0702 00:12:08.274525 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/494e11b6-9a48-4d69-ad2e-af504afc2a0a-config-volume\") pod \"coredns-5dd5756b68-jc5m4\" (UID: \"494e11b6-9a48-4d69-ad2e-af504afc2a0a\") " pod="kube-system/coredns-5dd5756b68-jc5m4" Jul 2 00:12:08.274571 kubelet[2638]: I0702 00:12:08.274584 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6af78c-71ab-4a94-9587-2d9dd3f62d96-config-volume\") pod \"coredns-5dd5756b68-cfqfk\" (UID: \"5b6af78c-71ab-4a94-9587-2d9dd3f62d96\") " pod="kube-system/coredns-5dd5756b68-cfqfk" Jul 2 00:12:08.274737 kubelet[2638]: I0702 00:12:08.274610 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46f62\" (UniqueName: \"kubernetes.io/projected/494e11b6-9a48-4d69-ad2e-af504afc2a0a-kube-api-access-46f62\") pod \"coredns-5dd5756b68-jc5m4\" (UID: \"494e11b6-9a48-4d69-ad2e-af504afc2a0a\") " pod="kube-system/coredns-5dd5756b68-jc5m4" Jul 2 00:12:08.274737 kubelet[2638]: I0702 00:12:08.274636 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznwb\" (UniqueName: \"kubernetes.io/projected/5b6af78c-71ab-4a94-9587-2d9dd3f62d96-kube-api-access-sznwb\") pod \"coredns-5dd5756b68-cfqfk\" (UID: \"5b6af78c-71ab-4a94-9587-2d9dd3f62d96\") " pod="kube-system/coredns-5dd5756b68-cfqfk" Jul 
2 00:12:08.549326 kubelet[2638]: E0702 00:12:08.549288 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:08.552784 containerd[1546]: time="2024-07-02T00:12:08.550767567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jc5m4,Uid:494e11b6-9a48-4d69-ad2e-af504afc2a0a,Namespace:kube-system,Attempt:0,}" Jul 2 00:12:08.556934 kubelet[2638]: E0702 00:12:08.556648 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:08.557797 containerd[1546]: time="2024-07-02T00:12:08.557743982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cfqfk,Uid:5b6af78c-71ab-4a94-9587-2d9dd3f62d96,Namespace:kube-system,Attempt:0,}" Jul 2 00:12:09.038384 kubelet[2638]: E0702 00:12:09.038301 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:09.054848 kubelet[2638]: I0702 00:12:09.053750 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s58zd" podStartSLOduration=7.360425507 podCreationTimestamp="2024-07-02 00:11:56 +0000 UTC" firstStartedPulling="2024-07-02 00:11:57.942581044 +0000 UTC m=+15.124068060" lastFinishedPulling="2024-07-02 00:12:03.635851149 +0000 UTC m=+20.817338165" observedRunningTime="2024-07-02 00:12:09.053222933 +0000 UTC m=+26.234709989" watchObservedRunningTime="2024-07-02 00:12:09.053695612 +0000 UTC m=+26.235182628" Jul 2 00:12:10.039607 kubelet[2638]: E0702 00:12:10.039504 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 
00:12:10.337090 systemd-networkd[1242]: cilium_host: Link UP Jul 2 00:12:10.337214 systemd-networkd[1242]: cilium_net: Link UP Jul 2 00:12:10.337217 systemd-networkd[1242]: cilium_net: Gained carrier Jul 2 00:12:10.337341 systemd-networkd[1242]: cilium_host: Gained carrier Jul 2 00:12:10.337509 systemd-networkd[1242]: cilium_host: Gained IPv6LL Jul 2 00:12:10.418691 systemd-networkd[1242]: cilium_vxlan: Link UP Jul 2 00:12:10.418696 systemd-networkd[1242]: cilium_vxlan: Gained carrier Jul 2 00:12:10.742592 kernel: NET: Registered PF_ALG protocol family Jul 2 00:12:11.041718 kubelet[2638]: E0702 00:12:11.041472 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:11.357634 systemd-networkd[1242]: cilium_net: Gained IPv6LL Jul 2 00:12:11.364391 systemd-networkd[1242]: lxc_health: Link UP Jul 2 00:12:11.385047 systemd-networkd[1242]: lxc_health: Gained carrier Jul 2 00:12:11.680112 systemd-networkd[1242]: lxce62c0c1715c0: Link UP Jul 2 00:12:11.694930 systemd-networkd[1242]: lxc309ce6f5746d: Link UP Jul 2 00:12:11.703591 kernel: eth0: renamed from tmpcdc13 Jul 2 00:12:11.709610 kernel: eth0: renamed from tmp87366 Jul 2 00:12:11.716090 systemd-networkd[1242]: lxce62c0c1715c0: Gained carrier Jul 2 00:12:11.717361 systemd-networkd[1242]: lxc309ce6f5746d: Gained carrier Jul 2 00:12:12.044969 kubelet[2638]: E0702 00:12:12.043900 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:12.055013 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL Jul 2 00:12:13.015103 systemd-networkd[1242]: lxc309ce6f5746d: Gained IPv6LL Jul 2 00:12:13.015383 systemd-networkd[1242]: lxce62c0c1715c0: Gained IPv6LL Jul 2 00:12:13.044663 kubelet[2638]: E0702 00:12:13.044635 2638 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:13.078943 systemd-networkd[1242]: lxc_health: Gained IPv6LL Jul 2 00:12:15.468305 containerd[1546]: time="2024-07-02T00:12:15.468039308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:12:15.468305 containerd[1546]: time="2024-07-02T00:12:15.468120359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:12:15.468305 containerd[1546]: time="2024-07-02T00:12:15.468139161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:12:15.468305 containerd[1546]: time="2024-07-02T00:12:15.468173606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:12:15.468818 containerd[1546]: time="2024-07-02T00:12:15.468493048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:12:15.468818 containerd[1546]: time="2024-07-02T00:12:15.468539414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:12:15.469009 containerd[1546]: time="2024-07-02T00:12:15.468918944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:12:15.469009 containerd[1546]: time="2024-07-02T00:12:15.468936346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:12:15.494341 systemd-resolved[1448]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:12:15.497651 systemd-resolved[1448]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:12:15.518127 containerd[1546]: time="2024-07-02T00:12:15.517922189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jc5m4,Uid:494e11b6-9a48-4d69-ad2e-af504afc2a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"87366942a394751ebf5419aa66a1f2ea56ad8983703d191a9b1bb825870b62f2\"" Jul 2 00:12:15.518127 containerd[1546]: time="2024-07-02T00:12:15.518057447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cfqfk,Uid:5b6af78c-71ab-4a94-9587-2d9dd3f62d96,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdc134acc416779d5cfc9b66318a75c87e283245f336d79b5ccdd7ceff9e9373\"" Jul 2 00:12:15.519065 kubelet[2638]: E0702 00:12:15.519038 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:15.519424 kubelet[2638]: E0702 00:12:15.519073 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:15.524207 containerd[1546]: time="2024-07-02T00:12:15.521885990Z" level=info msg="CreateContainer within sandbox \"87366942a394751ebf5419aa66a1f2ea56ad8983703d191a9b1bb825870b62f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:12:15.530430 containerd[1546]: time="2024-07-02T00:12:15.530325620Z" level=info msg="CreateContainer within sandbox \"cdc134acc416779d5cfc9b66318a75c87e283245f336d79b5ccdd7ceff9e9373\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:12:15.546965 
containerd[1546]: time="2024-07-02T00:12:15.546902361Z" level=info msg="CreateContainer within sandbox \"87366942a394751ebf5419aa66a1f2ea56ad8983703d191a9b1bb825870b62f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f016d0e9848f36db957a0ef2aa7fa3e374229385f6f6a887dcc7873277f019a1\"" Jul 2 00:12:15.548076 containerd[1546]: time="2024-07-02T00:12:15.548017267Z" level=info msg="StartContainer for \"f016d0e9848f36db957a0ef2aa7fa3e374229385f6f6a887dcc7873277f019a1\"" Jul 2 00:12:15.554584 containerd[1546]: time="2024-07-02T00:12:15.552870545Z" level=info msg="CreateContainer within sandbox \"cdc134acc416779d5cfc9b66318a75c87e283245f336d79b5ccdd7ceff9e9373\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f902d17d3bac5eda619316c2adb847b2305ca796a9992ff1372579e5c4e4d26\"" Jul 2 00:12:15.554584 containerd[1546]: time="2024-07-02T00:12:15.553433940Z" level=info msg="StartContainer for \"5f902d17d3bac5eda619316c2adb847b2305ca796a9992ff1372579e5c4e4d26\"" Jul 2 00:12:15.619881 containerd[1546]: time="2024-07-02T00:12:15.619833073Z" level=info msg="StartContainer for \"5f902d17d3bac5eda619316c2adb847b2305ca796a9992ff1372579e5c4e4d26\" returns successfully" Jul 2 00:12:15.620283 containerd[1546]: time="2024-07-02T00:12:15.619842394Z" level=info msg="StartContainer for \"f016d0e9848f36db957a0ef2aa7fa3e374229385f6f6a887dcc7873277f019a1\" returns successfully" Jul 2 00:12:16.052305 kubelet[2638]: E0702 00:12:16.052279 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:16.054458 kubelet[2638]: E0702 00:12:16.054339 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:16.064960 kubelet[2638]: I0702 00:12:16.064910 2638 pod_startup_latency_tracker.go:102] "Observed 
pod startup duration" pod="kube-system/coredns-5dd5756b68-jc5m4" podStartSLOduration=19.064866388 podCreationTimestamp="2024-07-02 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:12:16.062376552 +0000 UTC m=+33.243863568" watchObservedRunningTime="2024-07-02 00:12:16.064866388 +0000 UTC m=+33.246353364" Jul 2 00:12:16.078576 kubelet[2638]: I0702 00:12:16.078509 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cfqfk" podStartSLOduration=19.078467233 podCreationTimestamp="2024-07-02 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:12:16.078325455 +0000 UTC m=+33.259812471" watchObservedRunningTime="2024-07-02 00:12:16.078467233 +0000 UTC m=+33.259954249" Jul 2 00:12:17.056632 kubelet[2638]: E0702 00:12:17.056197 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:17.316809 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:32858.service - OpenSSH per-connection server daemon (10.0.0.1:32858). Jul 2 00:12:17.361678 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 32858 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:17.363484 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:17.367952 systemd-logind[1524]: New session 8 of user core. Jul 2 00:12:17.377880 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:12:17.605717 sshd[4031]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:17.610072 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:32858.service: Deactivated successfully. Jul 2 00:12:17.612827 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 2 00:12:17.612828 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:12:17.614190 systemd-logind[1524]: Removed session 8. Jul 2 00:12:18.057964 kubelet[2638]: E0702 00:12:18.057937 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:18.551173 kubelet[2638]: E0702 00:12:18.551142 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:19.059845 kubelet[2638]: E0702 00:12:19.059808 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:22.627882 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:51306.service - OpenSSH per-connection server daemon (10.0.0.1:51306). Jul 2 00:12:22.673859 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 51306 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:22.675334 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:22.682173 systemd-logind[1524]: New session 9 of user core. Jul 2 00:12:22.698935 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:12:22.817769 sshd[4051]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:22.820924 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:51306.service: Deactivated successfully. Jul 2 00:12:22.824986 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:12:22.825882 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:12:22.827322 systemd-logind[1524]: Removed session 9. Jul 2 00:12:27.837877 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:51318.service - OpenSSH per-connection server daemon (10.0.0.1:51318). 
Jul 2 00:12:27.887744 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 51318 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:27.889950 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:27.899608 systemd-logind[1524]: New session 10 of user core. Jul 2 00:12:27.913023 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:12:28.040856 sshd[4068]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:28.043749 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:51318.service: Deactivated successfully. Jul 2 00:12:28.047003 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:12:28.047273 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:12:28.049297 systemd-logind[1524]: Removed session 10. Jul 2 00:12:33.054867 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:48400.service - OpenSSH per-connection server daemon (10.0.0.1:48400). Jul 2 00:12:33.091617 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 48400 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:33.094288 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:33.100791 systemd-logind[1524]: New session 11 of user core. Jul 2 00:12:33.107914 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:12:33.232897 sshd[4086]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:33.249921 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Jul 2 00:12:33.250390 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:48400.service: Deactivated successfully. Jul 2 00:12:33.252132 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:12:33.253847 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:12:33.257991 systemd-logind[1524]: Removed session 11. 
Jul 2 00:12:33.292413 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:33.294231 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:33.298642 systemd-logind[1524]: New session 12 of user core. Jul 2 00:12:33.307857 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:12:34.088084 sshd[4099]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:34.098717 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:48422.service - OpenSSH per-connection server daemon (10.0.0.1:48422). Jul 2 00:12:34.100516 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:48410.service: Deactivated successfully. Jul 2 00:12:34.112992 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:12:34.118106 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:12:34.119997 systemd-logind[1524]: Removed session 12. Jul 2 00:12:34.151051 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 48422 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:12:34.152513 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:34.157117 systemd-logind[1524]: New session 13 of user core. Jul 2 00:12:34.169898 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:12:34.302806 sshd[4113]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:34.306496 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:48422.service: Deactivated successfully. Jul 2 00:12:34.308911 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:12:34.308987 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:12:34.311467 systemd-logind[1524]: Removed session 13. Jul 2 00:12:39.316863 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:48434.service - OpenSSH per-connection server daemon (10.0.0.1:48434). 
Jul 2 00:12:39.353284 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 48434 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:39.354844 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:39.361491 systemd-logind[1524]: New session 14 of user core.
Jul 2 00:12:39.371956 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:12:39.508515 sshd[4131]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:39.514322 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:48434.service: Deactivated successfully.
Jul 2 00:12:39.519212 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:12:39.519523 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:12:39.521270 systemd-logind[1524]: Removed session 14.
Jul 2 00:12:44.522904 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:50818.service - OpenSSH per-connection server daemon (10.0.0.1:50818).
Jul 2 00:12:44.560221 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 50818 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:44.561582 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:44.569493 systemd-logind[1524]: New session 15 of user core.
Jul 2 00:12:44.581862 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:12:44.715195 sshd[4148]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:44.729884 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:50822.service - OpenSSH per-connection server daemon (10.0.0.1:50822).
Jul 2 00:12:44.730373 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:50818.service: Deactivated successfully.
Jul 2 00:12:44.732057 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:12:44.733678 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:12:44.735864 systemd-logind[1524]: Removed session 15.
Jul 2 00:12:44.768984 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 50822 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:44.770400 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:44.776042 systemd-logind[1524]: New session 16 of user core.
Jul 2 00:12:44.786852 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:12:45.015076 sshd[4160]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:45.021961 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:50824.service - OpenSSH per-connection server daemon (10.0.0.1:50824).
Jul 2 00:12:45.022378 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:50822.service: Deactivated successfully.
Jul 2 00:12:45.024878 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:12:45.025513 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:12:45.027159 systemd-logind[1524]: Removed session 16.
Jul 2 00:12:45.065241 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 50824 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:45.066918 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:45.071697 systemd-logind[1524]: New session 17 of user core.
Jul 2 00:12:45.080823 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:12:45.865370 sshd[4173]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:45.873814 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:50832.service - OpenSSH per-connection server daemon (10.0.0.1:50832).
Jul 2 00:12:45.874198 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:50824.service: Deactivated successfully.
Jul 2 00:12:45.881645 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:12:45.882650 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:12:45.887918 systemd-logind[1524]: Removed session 17.
Jul 2 00:12:45.917859 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 50832 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:45.919212 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:45.924250 systemd-logind[1524]: New session 18 of user core.
Jul 2 00:12:45.929871 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:12:46.247464 sshd[4194]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:46.258818 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:50844.service - OpenSSH per-connection server daemon (10.0.0.1:50844).
Jul 2 00:12:46.259249 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:50832.service: Deactivated successfully.
Jul 2 00:12:46.260957 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:12:46.263526 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:12:46.265362 systemd-logind[1524]: Removed session 18.
Jul 2 00:12:46.293790 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 50844 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:46.295454 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:46.300701 systemd-logind[1524]: New session 19 of user core.
Jul 2 00:12:46.305804 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:12:46.431362 sshd[4208]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:46.434340 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:12:46.434604 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:50844.service: Deactivated successfully.
Jul 2 00:12:46.437695 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:12:46.438770 systemd-logind[1524]: Removed session 19.
Jul 2 00:12:51.446846 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:45442.service - OpenSSH per-connection server daemon (10.0.0.1:45442).
Jul 2 00:12:51.489713 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 45442 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:51.491550 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:51.495795 systemd-logind[1524]: New session 20 of user core.
Jul 2 00:12:51.507874 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:12:51.627458 sshd[4228]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:51.630821 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:45442.service: Deactivated successfully.
Jul 2 00:12:51.633118 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:12:51.633249 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:12:51.634641 systemd-logind[1524]: Removed session 20.
Jul 2 00:12:56.641835 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:45450.service - OpenSSH per-connection server daemon (10.0.0.1:45450).
Jul 2 00:12:56.690843 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 45450 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:12:56.692388 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:12:56.699955 systemd-logind[1524]: New session 21 of user core.
Jul 2 00:12:56.713867 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:12:56.839939 sshd[4243]: pam_unix(sshd:session): session closed for user core
Jul 2 00:12:56.845495 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:45450.service: Deactivated successfully.
Jul 2 00:12:56.848943 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:12:56.849582 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:12:56.859518 systemd-logind[1524]: Removed session 21.
Jul 2 00:12:58.924905 kubelet[2638]: E0702 00:12:58.923381 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:01.859858 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442).
Jul 2 00:13:01.895825 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:13:01.897401 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:01.904925 systemd-logind[1524]: New session 22 of user core.
Jul 2 00:13:01.910890 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:13:02.048333 sshd[4261]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:02.056003 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:57444.service - OpenSSH per-connection server daemon (10.0.0.1:57444).
Jul 2 00:13:02.056428 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:57442.service: Deactivated successfully.
Jul 2 00:13:02.063887 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:13:02.066667 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:13:02.068694 systemd-logind[1524]: Removed session 22.
Jul 2 00:13:02.096603 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U
Jul 2 00:13:02.098371 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:02.103263 systemd-logind[1524]: New session 23 of user core.
Jul 2 00:13:02.114870 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:13:04.525606 containerd[1546]: time="2024-07-02T00:13:04.524202689Z" level=info msg="StopContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" with timeout 30 (s)"
Jul 2 00:13:04.526701 containerd[1546]: time="2024-07-02T00:13:04.526649483Z" level=info msg="Stop container \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" with signal terminated"
Jul 2 00:13:04.555156 containerd[1546]: time="2024-07-02T00:13:04.555027211Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:13:04.566913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542-rootfs.mount: Deactivated successfully.
Jul 2 00:13:04.576572 containerd[1546]: time="2024-07-02T00:13:04.576518500Z" level=info msg="StopContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" with timeout 2 (s)"
Jul 2 00:13:04.576906 containerd[1546]: time="2024-07-02T00:13:04.576807370Z" level=info msg="Stop container \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" with signal terminated"
Jul 2 00:13:04.584992 systemd-networkd[1242]: lxc_health: Link DOWN
Jul 2 00:13:04.585001 systemd-networkd[1242]: lxc_health: Lost carrier
Jul 2 00:13:04.587606 containerd[1546]: time="2024-07-02T00:13:04.587495876Z" level=info msg="shim disconnected" id=14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542 namespace=k8s.io
Jul 2 00:13:04.587720 containerd[1546]: time="2024-07-02T00:13:04.587615712Z" level=warning msg="cleaning up after shim disconnected" id=14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542 namespace=k8s.io
Jul 2 00:13:04.587720 containerd[1546]: time="2024-07-02T00:13:04.587630671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:04.604948 containerd[1546]: time="2024-07-02T00:13:04.604896028Z" level=info msg="StopContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" returns successfully"
Jul 2 00:13:04.607615 containerd[1546]: time="2024-07-02T00:13:04.607571014Z" level=info msg="StopPodSandbox for \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\""
Jul 2 00:13:04.607830 containerd[1546]: time="2024-07-02T00:13:04.607764047Z" level=info msg="Container to stop \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.609798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220-shm.mount: Deactivated successfully.
Jul 2 00:13:04.632347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72-rootfs.mount: Deactivated successfully.
Jul 2 00:13:04.641503 containerd[1546]: time="2024-07-02T00:13:04.641410471Z" level=info msg="shim disconnected" id=8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72 namespace=k8s.io
Jul 2 00:13:04.641503 containerd[1546]: time="2024-07-02T00:13:04.641496348Z" level=warning msg="cleaning up after shim disconnected" id=8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72 namespace=k8s.io
Jul 2 00:13:04.641762 containerd[1546]: time="2024-07-02T00:13:04.641509228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:04.646825 containerd[1546]: time="2024-07-02T00:13:04.646761444Z" level=info msg="shim disconnected" id=20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220 namespace=k8s.io
Jul 2 00:13:04.646825 containerd[1546]: time="2024-07-02T00:13:04.646821162Z" level=warning msg="cleaning up after shim disconnected" id=20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220 namespace=k8s.io
Jul 2 00:13:04.646825 containerd[1546]: time="2024-07-02T00:13:04.646829962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:04.659490 containerd[1546]: time="2024-07-02T00:13:04.659440041Z" level=info msg="StopContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" returns successfully"
Jul 2 00:13:04.659915 containerd[1546]: time="2024-07-02T00:13:04.659886105Z" level=info msg="TearDown network for sandbox \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\" successfully"
Jul 2 00:13:04.659953 containerd[1546]: time="2024-07-02T00:13:04.659914984Z" level=info msg="StopPodSandbox for \"20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220\" returns successfully"
Jul 2 00:13:04.659953 containerd[1546]: time="2024-07-02T00:13:04.659927864Z" level=info msg="StopPodSandbox for \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\""
Jul 2 00:13:04.659996 containerd[1546]: time="2024-07-02T00:13:04.659962262Z" level=info msg="Container to stop \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.660027 containerd[1546]: time="2024-07-02T00:13:04.659995581Z" level=info msg="Container to stop \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.660027 containerd[1546]: time="2024-07-02T00:13:04.660005821Z" level=info msg="Container to stop \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.660027 containerd[1546]: time="2024-07-02T00:13:04.660015181Z" level=info msg="Container to stop \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.660027 containerd[1546]: time="2024-07-02T00:13:04.660023820Z" level=info msg="Container to stop \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:13:04.693569 containerd[1546]: time="2024-07-02T00:13:04.692463286Z" level=info msg="shim disconnected" id=a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a namespace=k8s.io
Jul 2 00:13:04.693569 containerd[1546]: time="2024-07-02T00:13:04.692929110Z" level=warning msg="cleaning up after shim disconnected" id=a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a namespace=k8s.io
Jul 2 00:13:04.693569 containerd[1546]: time="2024-07-02T00:13:04.692941430Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:04.705123 containerd[1546]: time="2024-07-02T00:13:04.705067686Z" level=info msg="TearDown network for sandbox \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" successfully"
Jul 2 00:13:04.705123 containerd[1546]: time="2024-07-02T00:13:04.705105524Z" level=info msg="StopPodSandbox for \"a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a\" returns successfully"
Jul 2 00:13:04.734339 kubelet[2638]: I0702 00:13:04.734304 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-hubble-tls\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.734339 kubelet[2638]: I0702 00:13:04.734350 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb8fr\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734372 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-kernel\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734391 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-cgroup\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734418 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-cilium-config-path\") pod \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\" (UID: \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\") "
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734437 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cni-path\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734427 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737005 kubelet[2638]: I0702 00:13:04.734453 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-run\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734513 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-etc-cni-netd\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734529 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-xtables-lock\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734550 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea1ad5ef-1ff9-4520-9401-86beb135399d-clustermesh-secrets\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734587 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-bpf-maps\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734606 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b4ld\" (UniqueName: \"kubernetes.io/projected/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-kube-api-access-2b4ld\") pod \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\" (UID: \"017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e\") "
Jul 2 00:13:04.737162 kubelet[2638]: I0702 00:13:04.734628 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-config-path\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737303 kubelet[2638]: I0702 00:13:04.734647 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-lib-modules\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737303 kubelet[2638]: I0702 00:13:04.734664 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-hostproc\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737303 kubelet[2638]: I0702 00:13:04.734682 2638 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-net\") pod \"ea1ad5ef-1ff9-4520-9401-86beb135399d\" (UID: \"ea1ad5ef-1ff9-4520-9401-86beb135399d\") "
Jul 2 00:13:04.737303 kubelet[2638]: I0702 00:13:04.734725 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737303 kubelet[2638]: I0702 00:13:04.734750 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737432 kubelet[2638]: I0702 00:13:04.734765 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737432 kubelet[2638]: I0702 00:13:04.734779 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737432 kubelet[2638]: I0702 00:13:04.734795 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737432 kubelet[2638]: I0702 00:13:04.735271 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737432 kubelet[2638]: I0702 00:13:04.735311 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737548 kubelet[2638]: I0702 00:13:04.735327 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737548 kubelet[2638]: I0702 00:13:04.735344 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:13:04.737548 kubelet[2638]: I0702 00:13:04.736479 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" (UID: "017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:13:04.737548 kubelet[2638]: I0702 00:13:04.737221 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:13:04.737958 kubelet[2638]: I0702 00:13:04.737912 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-kube-api-access-2b4ld" (OuterVolumeSpecName: "kube-api-access-2b4ld") pod "017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" (UID: "017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e"). InnerVolumeSpecName "kube-api-access-2b4ld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:13:04.738164 kubelet[2638]: I0702 00:13:04.738121 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:13:04.739206 kubelet[2638]: I0702 00:13:04.739172 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr" (OuterVolumeSpecName: "kube-api-access-zb8fr") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "kube-api-access-zb8fr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:13:04.740942 kubelet[2638]: I0702 00:13:04.740901 2638 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1ad5ef-1ff9-4520-9401-86beb135399d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ea1ad5ef-1ff9-4520-9401-86beb135399d" (UID: "ea1ad5ef-1ff9-4520-9401-86beb135399d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835173 2638 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835219 2638 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835231 2638 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835245 2638 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zb8fr\" (UniqueName: \"kubernetes.io/projected/ea1ad5ef-1ff9-4520-9401-86beb135399d-kube-api-access-zb8fr\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835257 2638 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835266 2638 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835276 2638 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835303 kubelet[2638]: I0702 00:13:04.835284 2638 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835293 2638 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835301 2638 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835311 2638 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2b4ld\" (UniqueName: \"kubernetes.io/projected/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e-kube-api-access-2b4ld\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835321 2638 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835330 2638 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea1ad5ef-1ff9-4520-9401-86beb135399d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835338 2638 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835348 2638 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea1ad5ef-1ff9-4520-9401-86beb135399d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.835622 kubelet[2638]: I0702 00:13:04.835356 2638 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea1ad5ef-1ff9-4520-9401-86beb135399d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:13:04.922381 kubelet[2638]: E0702 00:13:04.922343 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:05.163926 kubelet[2638]: I0702 00:13:05.163823 2638 scope.go:117] "RemoveContainer" containerID="8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72"
Jul 2 00:13:05.166794 containerd[1546]: time="2024-07-02T00:13:05.166463034Z" level=info msg="RemoveContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\""
Jul 2 00:13:05.171134 containerd[1546]: time="2024-07-02T00:13:05.171092403Z" level=info msg="RemoveContainer for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" returns successfully"
Jul 2 00:13:05.171528 kubelet[2638]: I0702 00:13:05.171481 2638 scope.go:117] "RemoveContainer" containerID="1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16"
Jul 2 00:13:05.172628 containerd[1546]: time="2024-07-02T00:13:05.172592554Z" level=info msg="RemoveContainer for \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\""
Jul 2 00:13:05.176039 containerd[1546]: time="2024-07-02T00:13:05.176001883Z" level=info msg="RemoveContainer for \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\" returns successfully"
Jul 2 00:13:05.176330 kubelet[2638]: I0702 00:13:05.176197 2638 scope.go:117] "RemoveContainer" containerID="7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337"
Jul 2 00:13:05.178232 containerd[1546]: time="2024-07-02T00:13:05.178188132Z" level=info msg="RemoveContainer for \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\""
Jul 2 00:13:05.182593 containerd[1546]: time="2024-07-02T00:13:05.182138163Z" level=info msg="RemoveContainer for \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\" returns successfully"
Jul 2 00:13:05.183654 kubelet[2638]: I0702 00:13:05.183621 2638 scope.go:117] "RemoveContainer" containerID="024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388"
Jul 2 00:13:05.184840 containerd[1546]: time="2024-07-02T00:13:05.184776878Z" level=info msg="RemoveContainer for \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\""
Jul 2 00:13:05.188575 containerd[1546]: time="2024-07-02T00:13:05.187595666Z" level=info msg="RemoveContainer for \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\" returns successfully"
Jul 2 00:13:05.189333 kubelet[2638]: I0702 00:13:05.189303 2638 scope.go:117] "RemoveContainer" containerID="c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e"
Jul 2 00:13:05.191527 containerd[1546]: time="2024-07-02T00:13:05.191486859Z" level=info msg="RemoveContainer for \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\""
Jul 2 00:13:05.198306 containerd[1546]: time="2024-07-02T00:13:05.198258999Z" level=info msg="RemoveContainer for \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\" returns successfully"
Jul 2 00:13:05.198535 kubelet[2638]: I0702 00:13:05.198500 2638 scope.go:117] "RemoveContainer" containerID="8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72"
Jul 2 00:13:05.206057 containerd[1546]: time="2024-07-02T00:13:05.198743703Z" level=error msg="ContainerStatus for \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\":
not found" Jul 2 00:13:05.210406 kubelet[2638]: E0702 00:13:05.210178 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\": not found" containerID="8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72" Jul 2 00:13:05.210406 kubelet[2638]: I0702 00:13:05.210291 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72"} err="failed to get container status \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\": rpc error: code = NotFound desc = an error occurred when try to find container \"8acf47901e0f7df0fd2c2a1a3b89b0edd09b0d73e1c96eb56f4028449b81ca72\": not found" Jul 2 00:13:05.210406 kubelet[2638]: I0702 00:13:05.210306 2638 scope.go:117] "RemoveContainer" containerID="1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16" Jul 2 00:13:05.210829 containerd[1546]: time="2024-07-02T00:13:05.210773991Z" level=error msg="ContainerStatus for \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\": not found" Jul 2 00:13:05.211095 kubelet[2638]: E0702 00:13:05.210933 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\": not found" containerID="1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16" Jul 2 00:13:05.211095 kubelet[2638]: I0702 00:13:05.210958 2638 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16"} err="failed to get container status \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e1036f0db89d7f4a7c999077bbf886c610e2f3738354c99ccbfd6a179e65a16\": not found" Jul 2 00:13:05.211095 kubelet[2638]: I0702 00:13:05.210968 2638 scope.go:117] "RemoveContainer" containerID="7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337" Jul 2 00:13:05.211172 containerd[1546]: time="2024-07-02T00:13:05.211114540Z" level=error msg="ContainerStatus for \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\": not found" Jul 2 00:13:05.211434 kubelet[2638]: E0702 00:13:05.211299 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\": not found" containerID="7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337" Jul 2 00:13:05.211434 kubelet[2638]: I0702 00:13:05.211361 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337"} err="failed to get container status \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\": rpc error: code = NotFound desc = an error occurred when try to find container \"7da4d48e9bae8f62235a728fc8fa3650266815ecb2c1c0a55d84872b50f65337\": not found" Jul 2 00:13:05.211434 kubelet[2638]: I0702 00:13:05.211373 2638 scope.go:117] "RemoveContainer" containerID="024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388" Jul 2 00:13:05.211701 kubelet[2638]: E0702 
00:13:05.211639 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\": not found" containerID="024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388" Jul 2 00:13:05.211701 kubelet[2638]: I0702 00:13:05.211674 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388"} err="failed to get container status \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\": rpc error: code = NotFound desc = an error occurred when try to find container \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\": not found" Jul 2 00:13:05.211701 kubelet[2638]: I0702 00:13:05.211686 2638 scope.go:117] "RemoveContainer" containerID="c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e" Jul 2 00:13:05.212204 containerd[1546]: time="2024-07-02T00:13:05.211509247Z" level=error msg="ContainerStatus for \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"024962babb6d06d56301fbebbdf32b83d7cde18458bcc82a4ede2af3f20eb388\": not found" Jul 2 00:13:05.212204 containerd[1546]: time="2024-07-02T00:13:05.211852916Z" level=error msg="ContainerStatus for \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\": not found" Jul 2 00:13:05.212265 kubelet[2638]: E0702 00:13:05.212012 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\": not found" containerID="c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e" Jul 2 00:13:05.212265 kubelet[2638]: I0702 00:13:05.212039 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e"} err="failed to get container status \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c56a3764139d1ecbd1570e82eb28b329591c507c6df6c991c897555ea55f239e\": not found" Jul 2 00:13:05.212265 kubelet[2638]: I0702 00:13:05.212049 2638 scope.go:117] "RemoveContainer" containerID="14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542" Jul 2 00:13:05.213105 containerd[1546]: time="2024-07-02T00:13:05.213077516Z" level=info msg="RemoveContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\"" Jul 2 00:13:05.216132 containerd[1546]: time="2024-07-02T00:13:05.216088378Z" level=info msg="RemoveContainer for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" returns successfully" Jul 2 00:13:05.216463 kubelet[2638]: I0702 00:13:05.216392 2638 scope.go:117] "RemoveContainer" containerID="14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542" Jul 2 00:13:05.216847 containerd[1546]: time="2024-07-02T00:13:05.216782396Z" level=error msg="ContainerStatus for \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\": not found" Jul 2 00:13:05.216997 kubelet[2638]: E0702 00:13:05.216979 2638 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\": not found" containerID="14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542" Jul 2 00:13:05.217078 kubelet[2638]: I0702 00:13:05.217063 2638 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542"} err="failed to get container status \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\": rpc error: code = NotFound desc = an error occurred when try to find container \"14b13a3a53a288e1565765769d56c4b39f94ee9d4001f6d07dfda5a24e688542\": not found" Jul 2 00:13:05.536215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a-rootfs.mount: Deactivated successfully. Jul 2 00:13:05.536364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20311bfa7b94f35391600cafae664c6cca31da5b4fa815985cbaf03cf190a220-rootfs.mount: Deactivated successfully. Jul 2 00:13:05.536450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6782d0c7950af08acc5ffc3fe9fe8432826d4311dfe2188db3184cec148fb8a-shm.mount: Deactivated successfully. Jul 2 00:13:05.536546 systemd[1]: var-lib-kubelet-pods-017c11c3\x2dcf40\x2d4ef0\x2da11c\x2d27f1e7e6dd7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2b4ld.mount: Deactivated successfully. Jul 2 00:13:05.536666 systemd[1]: var-lib-kubelet-pods-ea1ad5ef\x2d1ff9\x2d4520\x2d9401\x2d86beb135399d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzb8fr.mount: Deactivated successfully. Jul 2 00:13:05.536751 systemd[1]: var-lib-kubelet-pods-ea1ad5ef\x2d1ff9\x2d4520\x2d9401\x2d86beb135399d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:13:05.536830 systemd[1]: var-lib-kubelet-pods-ea1ad5ef\x2d1ff9\x2d4520\x2d9401\x2d86beb135399d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:13:06.467846 sshd[4273]: pam_unix(sshd:session): session closed for user core Jul 2 00:13:06.478853 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:57454.service - OpenSSH per-connection server daemon (10.0.0.1:57454). Jul 2 00:13:06.479267 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:57444.service: Deactivated successfully. Jul 2 00:13:06.483478 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:13:06.485248 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:13:06.486426 systemd-logind[1524]: Removed session 23. Jul 2 00:13:06.514595 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 57454 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:13:06.515926 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:13:06.521297 systemd-logind[1524]: New session 24 of user core. Jul 2 00:13:06.528961 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:13:06.923848 kubelet[2638]: I0702 00:13:06.923787 2638 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" path="/var/lib/kubelet/pods/017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e/volumes" Jul 2 00:13:06.924215 kubelet[2638]: I0702 00:13:06.924192 2638 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" path="/var/lib/kubelet/pods/ea1ad5ef-1ff9-4520-9401-86beb135399d/volumes" Jul 2 00:13:07.211711 sshd[4441]: pam_unix(sshd:session): session closed for user core Jul 2 00:13:07.221028 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:57468.service - OpenSSH per-connection server daemon (10.0.0.1:57468). 
Jul 2 00:13:07.222809 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:57454.service: Deactivated successfully. Jul 2 00:13:07.229499 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:13:07.230529 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:13:07.232817 kubelet[2638]: I0702 00:13:07.231099 2638 topology_manager.go:215] "Topology Admit Handler" podUID="a813280f-def6-4c44-b826-5207969654c0" podNamespace="kube-system" podName="cilium-2l4rm" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231155 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="mount-cgroup" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231165 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="mount-bpf-fs" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231174 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="cilium-agent" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231181 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="clean-cilium-state" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231188 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" containerName="cilium-operator" Jul 2 00:13:07.232817 kubelet[2638]: E0702 00:13:07.231196 2638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="apply-sysctl-overwrites" Jul 2 00:13:07.232817 kubelet[2638]: I0702 00:13:07.231221 2638 memory_manager.go:346] "RemoveStaleState removing state" podUID="017c11c3-cf40-4ef0-a11c-27f1e7e6dd7e" containerName="cilium-operator" Jul 2 00:13:07.232817 kubelet[2638]: I0702 00:13:07.231227 2638 memory_manager.go:346] 
"RemoveStaleState removing state" podUID="ea1ad5ef-1ff9-4520-9401-86beb135399d" containerName="cilium-agent" Jul 2 00:13:07.237000 systemd-logind[1524]: Removed session 24. Jul 2 00:13:07.241277 kubelet[2638]: W0702 00:13:07.240150 2638 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:13:07.241277 kubelet[2638]: E0702 00:13:07.240190 2638 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:13:07.241277 kubelet[2638]: W0702 00:13:07.240233 2638 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:13:07.241277 kubelet[2638]: E0702 00:13:07.240242 2638 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:13:07.241277 kubelet[2638]: W0702 00:13:07.240272 2638 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node 'localhost' and this object Jul 2 00:13:07.245317 kubelet[2638]: E0702 00:13:07.240281 2638 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:13:07.245317 kubelet[2638]: I0702 00:13:07.243902 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-cni-path\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245317 kubelet[2638]: I0702 00:13:07.243944 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-etc-cni-netd\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245317 kubelet[2638]: I0702 00:13:07.243966 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a813280f-def6-4c44-b826-5207969654c0-clustermesh-secrets\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245317 kubelet[2638]: I0702 00:13:07.243987 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-host-proc-sys-net\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245317 kubelet[2638]: I0702 
00:13:07.244007 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-cilium-cgroup\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245542 kubelet[2638]: I0702 00:13:07.244026 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a813280f-def6-4c44-b826-5207969654c0-cilium-config-path\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245542 kubelet[2638]: I0702 00:13:07.244045 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a813280f-def6-4c44-b826-5207969654c0-cilium-ipsec-secrets\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245542 kubelet[2638]: I0702 00:13:07.244067 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8fn\" (UniqueName: \"kubernetes.io/projected/a813280f-def6-4c44-b826-5207969654c0-kube-api-access-gk8fn\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245542 kubelet[2638]: I0702 00:13:07.244086 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-bpf-maps\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245542 kubelet[2638]: I0702 00:13:07.244106 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-host-proc-sys-kernel\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245697 kubelet[2638]: I0702 00:13:07.244172 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a813280f-def6-4c44-b826-5207969654c0-hubble-tls\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245697 kubelet[2638]: I0702 00:13:07.244207 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-cilium-run\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245697 kubelet[2638]: I0702 00:13:07.244227 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-hostproc\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245697 kubelet[2638]: I0702 00:13:07.244294 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-lib-modules\") pod \"cilium-2l4rm\" (UID: \"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.245697 kubelet[2638]: I0702 00:13:07.244321 2638 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a813280f-def6-4c44-b826-5207969654c0-xtables-lock\") pod \"cilium-2l4rm\" (UID: 
\"a813280f-def6-4c44-b826-5207969654c0\") " pod="kube-system/cilium-2l4rm" Jul 2 00:13:07.279466 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 57468 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:13:07.280882 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:13:07.285379 systemd-logind[1524]: New session 25 of user core. Jul 2 00:13:07.291884 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:13:07.341107 sshd[4455]: pam_unix(sshd:session): session closed for user core Jul 2 00:13:07.352940 systemd[1]: Started sshd@25-10.0.0.82:22-10.0.0.1:57472.service - OpenSSH per-connection server daemon (10.0.0.1:57472). Jul 2 00:13:07.353402 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:57468.service: Deactivated successfully. Jul 2 00:13:07.361607 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:13:07.364218 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:13:07.365595 systemd-logind[1524]: Removed session 25. Jul 2 00:13:07.389930 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 57472 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:13:07.391753 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:13:07.395987 systemd-logind[1524]: New session 26 of user core. Jul 2 00:13:07.402855 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 00:13:08.008939 kubelet[2638]: E0702 00:13:08.008880 2638 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:13:08.347318 kubelet[2638]: E0702 00:13:08.347272 2638 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:13:08.347318 kubelet[2638]: E0702 00:13:08.347310 2638 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-2l4rm: failed to sync secret cache: timed out waiting for the condition Jul 2 00:13:08.347476 kubelet[2638]: E0702 00:13:08.347381 2638 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a813280f-def6-4c44-b826-5207969654c0-hubble-tls podName:a813280f-def6-4c44-b826-5207969654c0 nodeName:}" failed. No retries permitted until 2024-07-02 00:13:08.847356964 +0000 UTC m=+86.028843940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/a813280f-def6-4c44-b826-5207969654c0-hubble-tls") pod "cilium-2l4rm" (UID: "a813280f-def6-4c44-b826-5207969654c0") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:13:08.921773 kubelet[2638]: E0702 00:13:08.921727 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:09.040753 kubelet[2638]: E0702 00:13:09.040711 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:09.041749 containerd[1546]: time="2024-07-02T00:13:09.041695733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l4rm,Uid:a813280f-def6-4c44-b826-5207969654c0,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:09.065238 containerd[1546]: time="2024-07-02T00:13:09.065134339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:13:09.065238 containerd[1546]: time="2024-07-02T00:13:09.065189338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:09.065238 containerd[1546]: time="2024-07-02T00:13:09.065222697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:13:09.065238 containerd[1546]: time="2024-07-02T00:13:09.065233657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:09.107259 containerd[1546]: time="2024-07-02T00:13:09.107220665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l4rm,Uid:a813280f-def6-4c44-b826-5207969654c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\""
Jul 2 00:13:09.108135 kubelet[2638]: E0702 00:13:09.108111 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:09.110896 containerd[1546]: time="2024-07-02T00:13:09.110773141Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:13:09.125411 containerd[1546]: time="2024-07-02T00:13:09.125284438Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b1b47f7478d77d1241d2490523c0bb27c1ff03cd495fd5f4fddfe29d3424c53\""
Jul 2 00:13:09.128179 containerd[1546]: time="2024-07-02T00:13:09.125890424Z" level=info msg="StartContainer for \"5b1b47f7478d77d1241d2490523c0bb27c1ff03cd495fd5f4fddfe29d3424c53\""
Jul 2 00:13:09.171380 containerd[1546]: time="2024-07-02T00:13:09.171325991Z" level=info msg="StartContainer for \"5b1b47f7478d77d1241d2490523c0bb27c1ff03cd495fd5f4fddfe29d3424c53\" returns successfully"
Jul 2 00:13:09.180464 kubelet[2638]: E0702 00:13:09.180362 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:09.221444 containerd[1546]: time="2024-07-02T00:13:09.221383408Z" level=info msg="shim disconnected" id=5b1b47f7478d77d1241d2490523c0bb27c1ff03cd495fd5f4fddfe29d3424c53 namespace=k8s.io
Jul 2 00:13:09.221444 containerd[1546]: time="2024-07-02T00:13:09.221439567Z" level=warning msg="cleaning up after shim disconnected" id=5b1b47f7478d77d1241d2490523c0bb27c1ff03cd495fd5f4fddfe29d3424c53 namespace=k8s.io
Jul 2 00:13:09.221444 containerd[1546]: time="2024-07-02T00:13:09.221450767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:10.183269 kubelet[2638]: E0702 00:13:10.182848 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:10.186255 containerd[1546]: time="2024-07-02T00:13:10.186193272Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:13:10.197769 containerd[1546]: time="2024-07-02T00:13:10.197714864Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d\""
Jul 2 00:13:10.199934 containerd[1546]: time="2024-07-02T00:13:10.198441648Z" level=info msg="StartContainer for \"aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d\""
Jul 2 00:13:10.245211 containerd[1546]: time="2024-07-02T00:13:10.245169241Z" level=info msg="StartContainer for \"aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d\" returns successfully"
Jul 2 00:13:10.275519 containerd[1546]: time="2024-07-02T00:13:10.275442228Z" level=info msg="shim disconnected" id=aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d namespace=k8s.io
Jul 2 00:13:10.275519 containerd[1546]: time="2024-07-02T00:13:10.275507786Z" level=warning msg="cleaning up after shim disconnected" id=aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d namespace=k8s.io
Jul 2 00:13:10.275519 containerd[1546]: time="2024-07-02T00:13:10.275516106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:10.861096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aabea29dea6daa2de64d8432715b647b5190200952323c140ada50e37e65e05d-rootfs.mount: Deactivated successfully.
Jul 2 00:13:11.187058 kubelet[2638]: E0702 00:13:11.186958 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:11.192320 containerd[1546]: time="2024-07-02T00:13:11.192274241Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:13:11.208454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518846729.mount: Deactivated successfully.
Jul 2 00:13:11.210071 containerd[1546]: time="2024-07-02T00:13:11.210023454Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968\""
Jul 2 00:13:11.211040 containerd[1546]: time="2024-07-02T00:13:11.210811638Z" level=info msg="StartContainer for \"0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968\""
Jul 2 00:13:11.258850 containerd[1546]: time="2024-07-02T00:13:11.258251550Z" level=info msg="StartContainer for \"0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968\" returns successfully"
Jul 2 00:13:11.288079 containerd[1546]: time="2024-07-02T00:13:11.287820932Z" level=info msg="shim disconnected" id=0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968 namespace=k8s.io
Jul 2 00:13:11.288079 containerd[1546]: time="2024-07-02T00:13:11.287877450Z" level=warning msg="cleaning up after shim disconnected" id=0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968 namespace=k8s.io
Jul 2 00:13:11.288079 containerd[1546]: time="2024-07-02T00:13:11.287885610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:11.861160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7586fee4a42de15b435cbb2a89cf8e49d9c6ce033c25742e896843f3316968-rootfs.mount: Deactivated successfully.
Jul 2 00:13:12.190767 kubelet[2638]: E0702 00:13:12.190636 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:12.194596 containerd[1546]: time="2024-07-02T00:13:12.193749619Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:13:12.206897 containerd[1546]: time="2024-07-02T00:13:12.206581433Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb\""
Jul 2 00:13:12.209379 containerd[1546]: time="2024-07-02T00:13:12.208510719Z" level=info msg="StartContainer for \"7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb\""
Jul 2 00:13:12.264104 containerd[1546]: time="2024-07-02T00:13:12.264060019Z" level=info msg="StartContainer for \"7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb\" returns successfully"
Jul 2 00:13:12.287291 containerd[1546]: time="2024-07-02T00:13:12.287077813Z" level=info msg="shim disconnected" id=7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb namespace=k8s.io
Jul 2 00:13:12.287291 containerd[1546]: time="2024-07-02T00:13:12.287138532Z" level=warning msg="cleaning up after shim disconnected" id=7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb namespace=k8s.io
Jul 2 00:13:12.287291 containerd[1546]: time="2024-07-02T00:13:12.287146852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:12.861300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e91232047246533f78108a7cd4b31913b09276dead3ba12c50fc5d8db56decb-rootfs.mount: Deactivated successfully.
Jul 2 00:13:13.009780 kubelet[2638]: E0702 00:13:13.009699 2638 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:13:13.199935 kubelet[2638]: E0702 00:13:13.199659 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:13.202981 containerd[1546]: time="2024-07-02T00:13:13.202885081Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:13:13.216264 containerd[1546]: time="2024-07-02T00:13:13.216196791Z" level=info msg="CreateContainer within sandbox \"00d53ec1fe447e2bdfb961cc6f013d2ffc4c533fd1aa6fb3a7296f169b62905a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"538976686ea2eb22c004830331d3f9fc83cf09e8db4972c32d7ceed1420d666d\""
Jul 2 00:13:13.216881 containerd[1546]: time="2024-07-02T00:13:13.216843061Z" level=info msg="StartContainer for \"538976686ea2eb22c004830331d3f9fc83cf09e8db4972c32d7ceed1420d666d\""
Jul 2 00:13:13.271011 containerd[1546]: time="2024-07-02T00:13:13.270954408Z" level=info msg="StartContainer for \"538976686ea2eb22c004830331d3f9fc83cf09e8db4972c32d7ceed1420d666d\" returns successfully"
Jul 2 00:13:13.535701 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 2 00:13:13.861387 systemd[1]: run-containerd-runc-k8s.io-538976686ea2eb22c004830331d3f9fc83cf09e8db4972c32d7ceed1420d666d-runc.8l7V2I.mount: Deactivated successfully.
Jul 2 00:13:14.204387 kubelet[2638]: E0702 00:13:14.204293 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:15.127059 kubelet[2638]: I0702 00:13:15.127027 2638 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:13:15Z","lastTransitionTime":"2024-07-02T00:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:13:15.210216 kubelet[2638]: E0702 00:13:15.210048 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:16.458654 systemd-networkd[1242]: lxc_health: Link UP
Jul 2 00:13:16.463595 systemd-networkd[1242]: lxc_health: Gained carrier
Jul 2 00:13:17.045742 kubelet[2638]: E0702 00:13:17.044142 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:17.060827 kubelet[2638]: I0702 00:13:17.060365 2638 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2l4rm" podStartSLOduration=10.060325955 podCreationTimestamp="2024-07-02 00:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:13:14.21984009 +0000 UTC m=+91.401327106" watchObservedRunningTime="2024-07-02 00:13:17.060325955 +0000 UTC m=+94.241812971"
Jul 2 00:13:17.213062 kubelet[2638]: E0702 00:13:17.213034 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:18.217034 kubelet[2638]: E0702 00:13:18.216963 2638 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:18.295744 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Jul 2 00:13:22.253315 sshd[4465]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:22.256388 systemd[1]: sshd@25-10.0.0.82:22-10.0.0.1:57472.service: Deactivated successfully.
Jul 2 00:13:22.259173 systemd-logind[1524]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:13:22.259340 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:13:22.261870 systemd-logind[1524]: Removed session 26.