Jun 25 18:47:22.904798 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:47:22.904819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:22.904830 kernel: BIOS-provided physical RAM map: Jun 25 18:47:22.904836 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 25 18:47:22.904843 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jun 25 18:47:22.904849 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jun 25 18:47:22.904856 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jun 25 18:47:22.904862 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jun 25 18:47:22.904868 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jun 25 18:47:22.904875 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jun 25 18:47:22.904883 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jun 25 18:47:22.904889 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jun 25 18:47:22.904895 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jun 25 18:47:22.904902 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jun 25 18:47:22.904910 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jun 25 18:47:22.904919 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jun 25 18:47:22.904926 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jun 25 18:47:22.904932 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jun 25 18:47:22.904939 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jun 25 18:47:22.904946 kernel: NX (Execute Disable) protection: active Jun 25 18:47:22.904953 kernel: APIC: Static calls initialized Jun 25 18:47:22.904959 kernel: efi: EFI v2.7 by EDK II Jun 25 18:47:22.904966 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4f9018 Jun 25 18:47:22.904973 kernel: SMBIOS 2.8 present. 
Jun 25 18:47:22.904980 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jun 25 18:47:22.904986 kernel: Hypervisor detected: KVM Jun 25 18:47:22.904993 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:47:22.905002 kernel: kvm-clock: using sched offset of 4094121452 cycles Jun 25 18:47:22.905009 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:47:22.905016 kernel: tsc: Detected 2794.750 MHz processor Jun 25 18:47:22.905023 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:47:22.905031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:47:22.905038 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jun 25 18:47:22.905045 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 25 18:47:22.905052 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:47:22.905058 kernel: Using GB pages for direct mapping Jun 25 18:47:22.905067 kernel: Secure boot disabled Jun 25 18:47:22.905074 kernel: ACPI: Early table checksum verification disabled Jun 25 18:47:22.905081 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jun 25 18:47:22.905088 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jun 25 18:47:22.905099 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:22.905106 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:22.905115 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jun 25 18:47:22.905122 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:22.905129 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:22.905137 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:47:22.905144 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jun 25 18:47:22.905151 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jun 25 18:47:22.905158 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jun 25 18:47:22.905165 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jun 25 18:47:22.905174 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jun 25 18:47:22.905181 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jun 25 18:47:22.905189 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jun 25 18:47:22.905196 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jun 25 18:47:22.905203 kernel: No NUMA configuration found Jun 25 18:47:22.905210 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jun 25 18:47:22.905217 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jun 25 18:47:22.905224 kernel: Zone ranges: Jun 25 18:47:22.905231 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:47:22.905241 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jun 25 18:47:22.905255 kernel: Normal empty Jun 25 18:47:22.905262 kernel: Movable zone start for each node Jun 25 18:47:22.905270 kernel: Early memory node ranges Jun 25 18:47:22.905277 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 25 18:47:22.905284 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jun 25 18:47:22.905291 kernel: node 0: [mem 
0x0000000000808000-0x000000000080afff] Jun 25 18:47:22.905298 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jun 25 18:47:22.905305 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jun 25 18:47:22.905312 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jun 25 18:47:22.905322 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jun 25 18:47:22.905329 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:47:22.905336 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 25 18:47:22.905343 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jun 25 18:47:22.905350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:47:22.905357 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jun 25 18:47:22.905364 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jun 25 18:47:22.905372 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jun 25 18:47:22.905379 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 18:47:22.905388 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:47:22.905396 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:47:22.905403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 18:47:22.905410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:47:22.905417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:47:22.905424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:47:22.905431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:47:22.905438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:47:22.905446 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 18:47:22.905455 kernel: TSC deadline timer available Jun 25 18:47:22.905462 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 18:47:22.905469 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:47:22.905476 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 18:47:22.905483 kernel: kvm-guest: setup PV sched yield Jun 25 18:47:22.905490 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jun 25 18:47:22.905497 kernel: Booting paravirtualized kernel on KVM Jun 25 18:47:22.905505 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:47:22.905512 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 18:47:22.905521 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Jun 25 18:47:22.905528 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Jun 25 18:47:22.905535 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 18:47:22.905542 kernel: kvm-guest: PV spinlocks enabled Jun 25 18:47:22.905550 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:47:22.905559 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:22.905566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 18:47:22.905573 kernel: random: crng init done Jun 25 18:47:22.905581 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:47:22.905590 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:47:22.905597 kernel: Fallback order for Node 0: 0 Jun 25 18:47:22.905604 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jun 25 18:47:22.905611 kernel: Policy zone: DMA32 Jun 25 18:47:22.905619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:47:22.905626 kernel: Memory: 2388204K/2567000K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 178536K reserved, 0K cma-reserved) Jun 25 18:47:22.905634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 18:47:22.905641 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:47:22.905650 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:47:22.905657 kernel: Dynamic Preempt: voluntary Jun 25 18:47:22.905664 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:47:22.905672 kernel: rcu: RCU event tracing is enabled. Jun 25 18:47:22.905698 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 18:47:22.905714 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:47:22.905724 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:47:22.905731 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:47:22.905739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:47:22.905746 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 18:47:22.905754 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 18:47:22.905761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:47:22.905771 kernel: Console: colour dummy device 80x25 Jun 25 18:47:22.905778 kernel: printk: console [ttyS0] enabled Jun 25 18:47:22.905786 kernel: ACPI: Core revision 20230628 Jun 25 18:47:22.905794 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 18:47:22.905801 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:47:22.905811 kernel: x2apic enabled Jun 25 18:47:22.905819 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:47:22.905826 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jun 25 18:47:22.905834 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jun 25 18:47:22.905841 kernel: kvm-guest: setup PV IPIs Jun 25 18:47:22.905849 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 18:47:22.905856 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 18:47:22.905864 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 18:47:22.905872 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 18:47:22.905881 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 18:47:22.905889 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 18:47:22.905896 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:47:22.905904 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:47:22.905911 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:47:22.905919 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:47:22.905926 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 18:47:22.905934 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 18:47:22.905941 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 18:47:22.905951 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 18:47:22.905959 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jun 25 18:47:22.905967 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jun 25 18:47:22.905975 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jun 25 18:47:22.905983 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:47:22.905990 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:47:22.905998 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:47:22.906005 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:47:22.906013 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 18:47:22.906023 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:47:22.906030 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:47:22.906037 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:47:22.906045 kernel: SELinux: Initializing. Jun 25 18:47:22.906053 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:47:22.906061 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:47:22.906068 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 18:47:22.906076 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:22.906085 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:22.906093 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:47:22.906100 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 18:47:22.906108 kernel: ... version: 0 Jun 25 18:47:22.906115 kernel: ... bit width: 48 Jun 25 18:47:22.906123 kernel: ... generic registers: 6 Jun 25 18:47:22.906130 kernel: ... value mask: 0000ffffffffffff Jun 25 18:47:22.906138 kernel: ... max period: 00007fffffffffff Jun 25 18:47:22.906145 kernel: ... fixed-purpose events: 0 Jun 25 18:47:22.906154 kernel: ... event mask: 000000000000003f Jun 25 18:47:22.906162 kernel: signal: max sigframe size: 1776 Jun 25 18:47:22.906169 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:47:22.906177 kernel: rcu: Max phase no-delay instances is 400. 
Jun 25 18:47:22.906185 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:47:22.906192 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:47:22.906199 kernel: .... node #0, CPUs: #1 #2 #3 Jun 25 18:47:22.906207 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 18:47:22.906214 kernel: smpboot: Max logical packages: 1 Jun 25 18:47:22.906222 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 18:47:22.906231 kernel: devtmpfs: initialized Jun 25 18:47:22.906239 kernel: x86/mm: Memory block size: 128MB Jun 25 18:47:22.906246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jun 25 18:47:22.906266 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jun 25 18:47:22.906273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jun 25 18:47:22.906281 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jun 25 18:47:22.906289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jun 25 18:47:22.906296 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:47:22.906304 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 18:47:22.906314 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:47:22.906321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:47:22.906329 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:47:22.906336 kernel: audit: type=2000 audit(1719341242.490:1): state=initialized audit_enabled=0 res=1 Jun 25 18:47:22.906343 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:47:22.906351 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:47:22.906358 kernel: cpuidle: using governor menu Jun 25 18:47:22.906366 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:47:22.906375 kernel: dca service started, version 1.12.1 Jun 25 18:47:22.906383 kernel: PCI: Using configuration type 1 for base access Jun 25 18:47:22.906390 kernel: PCI: Using configuration type 1 for extended access Jun 25 18:47:22.906398 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 18:47:22.906406 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:47:22.906413 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:47:22.906421 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:47:22.906428 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:47:22.906435 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:47:22.906445 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:47:22.906453 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:47:22.906460 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:47:22.906467 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:47:22.906475 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:47:22.906482 kernel: ACPI: Interpreter enabled Jun 25 18:47:22.906490 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 18:47:22.906497 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:47:22.906505 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:47:22.906512 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:47:22.906522 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 18:47:22.906529 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:47:22.906728 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:47:22.906741 kernel: acpiphp: Slot [3] registered Jun 25 18:47:22.906749 kernel: acpiphp: Slot [4] registered Jun 25 18:47:22.906756 kernel: acpiphp: Slot [5] registered Jun 25 18:47:22.906764 kernel: acpiphp: Slot [6] registered Jun 25 18:47:22.906771 kernel: acpiphp: Slot [7] registered Jun 25 18:47:22.906782 kernel: acpiphp: Slot [8] registered Jun 25 18:47:22.906789 kernel: acpiphp: Slot [9] registered Jun 25 18:47:22.906796 kernel: acpiphp: Slot [10] registered Jun 25 18:47:22.906804 kernel: acpiphp: Slot [11] registered Jun 25 18:47:22.906811 kernel: acpiphp: Slot [12] registered Jun 25 18:47:22.906819 kernel: acpiphp: Slot [13] registered Jun 25 18:47:22.906826 kernel: acpiphp: Slot [14] registered Jun 25 18:47:22.906833 kernel: acpiphp: Slot [15] registered Jun 25 18:47:22.906841 kernel: acpiphp: Slot [16] registered Jun 25 18:47:22.906850 kernel: acpiphp: Slot [17] registered Jun 25 18:47:22.906857 kernel: acpiphp: Slot [18] registered Jun 25 18:47:22.906865 kernel: acpiphp: Slot [19] registered Jun 25 18:47:22.906872 kernel: acpiphp: Slot [20] registered Jun 25 18:47:22.906879 kernel: acpiphp: Slot [21] registered Jun 25 18:47:22.906887 kernel: acpiphp: Slot [22] registered Jun 25 18:47:22.906894 kernel: acpiphp: Slot [23] registered Jun 25 18:47:22.906901 kernel: acpiphp: Slot [24] registered Jun 25 18:47:22.906909 kernel: acpiphp: Slot [25] registered Jun 25 18:47:22.906916 kernel: acpiphp: Slot [26] registered Jun 25 18:47:22.906926 kernel: acpiphp: Slot [27] registered Jun 25 18:47:22.906933 kernel: acpiphp: Slot [28] registered Jun 25 18:47:22.906940 kernel: acpiphp: Slot [29] registered Jun 25 18:47:22.906948 kernel: acpiphp: Slot [30] registered Jun 25 18:47:22.906955 kernel: acpiphp: Slot [31] registered Jun 25 18:47:22.906962 kernel: PCI host bridge to bus 0000:00 Jun 25 18:47:22.907095 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:47:22.907207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 18:47:22.907331 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Jun 25 18:47:22.907444 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 18:47:22.907556 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jun 25 18:47:22.907666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:47:22.907821 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:47:22.907958 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:47:22.908091 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 18:47:22.908211 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 18:47:22.908340 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 18:47:22.908460 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 18:47:22.908579 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 18:47:22.908714 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 18:47:22.908849 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 18:47:22.908977 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 18:47:22.909099 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 18:47:22.909229 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 18:47:22.909360 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jun 25 18:47:22.909482 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jun 25 18:47:22.909630 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jun 25 18:47:22.909779 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jun 25 18:47:22.909905 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:47:22.910035 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:47:22.910158 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 18:47:22.910292 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jun 25 18:47:22.910414 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jun 25 18:47:22.910543 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 18:47:22.910670 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 18:47:22.910807 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jun 25 18:47:22.910928 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jun 25 18:47:22.911062 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 18:47:22.911183 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 18:47:22.911315 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jun 25 18:47:22.911440 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jun 25 18:47:22.911586 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jun 25 18:47:22.911601 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:47:22.911608 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:47:22.911616 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:47:22.911624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 18:47:22.911631 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:47:22.911639 kernel: iommu: Default domain type: Translated Jun 25 18:47:22.911646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 
18:47:22.911654 kernel: efivars: Registered efivars operations Jun 25 18:47:22.911661 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:47:22.911671 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:47:22.911690 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jun 25 18:47:22.911698 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jun 25 18:47:22.911706 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jun 25 18:47:22.911713 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jun 25 18:47:22.911836 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 18:47:22.911955 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 18:47:22.912074 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:47:22.912087 kernel: vgaarb: loaded Jun 25 18:47:22.912095 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 18:47:22.912102 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 18:47:22.912110 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 18:47:22.912118 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:47:22.912125 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:47:22.912133 kernel: pnp: PnP ACPI init Jun 25 18:47:22.912275 kernel: pnp 00:02: [dma 2] Jun 25 18:47:22.912290 kernel: pnp: PnP ACPI: found 6 devices Jun 25 18:47:22.912298 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:47:22.912305 kernel: NET: Registered PF_INET protocol family Jun 25 18:47:22.912313 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:47:22.912320 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:47:22.912328 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:47:22.912336 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:47:22.912344 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:47:22.912351 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:47:22.912361 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:47:22.912368 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:47:22.912376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:47:22.912383 kernel: NET: Registered PF_XDP protocol family Jun 25 18:47:22.912509 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jun 25 18:47:22.912633 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jun 25 18:47:22.912783 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:47:22.912898 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:47:22.913022 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:47:22.913134 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 18:47:22.913245 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jun 25 18:47:22.913378 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 18:47:22.913500 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:47:22.913510 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:47:22.913539 kernel: Initialise system trusted keyrings Jun 25 18:47:22.913547 kernel: 
workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:47:22.913558 kernel: Key type asymmetric registered Jun 25 18:47:22.913566 kernel: Asymmetric key parser 'x509' registered Jun 25 18:47:22.913573 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:47:22.913581 kernel: io scheduler mq-deadline registered Jun 25 18:47:22.913588 kernel: io scheduler kyber registered Jun 25 18:47:22.913596 kernel: io scheduler bfq registered Jun 25 18:47:22.913604 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:47:22.913612 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 18:47:22.913620 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 18:47:22.913630 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 18:47:22.913637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:47:22.913645 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:47:22.913653 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:47:22.913688 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:47:22.913699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:47:22.913707 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:47:22.913838 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 18:47:22.913958 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 18:47:22.914079 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:47:22 UTC (1719341242) Jun 25 18:47:22.914194 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 18:47:22.914204 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 25 18:47:22.914212 kernel: efifb: probing for efifb Jun 25 18:47:22.914220 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jun 25 18:47:22.914228 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jun 25 18:47:22.914235 kernel: efifb: scrolling: redraw Jun 25 18:47:22.914243 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jun 25 18:47:22.914263 kernel: Console: switching to colour frame buffer device 100x37 Jun 25 18:47:22.914272 kernel: fb0: EFI VGA frame buffer device Jun 25 18:47:22.914282 kernel: pstore: Using crash dump compression: deflate Jun 25 18:47:22.914290 kernel: pstore: Registered efi_pstore as persistent store backend Jun 25 18:47:22.914298 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:47:22.914305 kernel: Segment Routing with IPv6 Jun 25 18:47:22.914313 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:47:22.914321 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:47:22.914329 kernel: Key type dns_resolver registered Jun 25 18:47:22.914339 kernel: IPI shorthand broadcast: enabled Jun 25 18:47:22.914347 kernel: sched_clock: Marking stable (743003060, 112423439)->(870175689, -14749190) Jun 25 18:47:22.914357 kernel: registered taskstats version 1 Jun 25 18:47:22.914365 kernel: Loading compiled-in X.509 certificates Jun 25 18:47:22.914373 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:47:22.914381 kernel: Key type .fscrypt registered Jun 25 18:47:22.914391 kernel: Key type fscrypt-provisioning registered Jun 25 18:47:22.914399 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:47:22.914407 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:47:22.914415 kernel: ima: No architecture policies found Jun 25 18:47:22.914423 kernel: clk: Disabling unused clocks Jun 25 18:47:22.914431 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:47:22.914439 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:47:22.914447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:47:22.914455 kernel: Run /init as init process Jun 25 18:47:22.914466 kernel: with arguments: Jun 25 18:47:22.914473 kernel: /init Jun 25 18:47:22.914481 kernel: with environment: Jun 25 18:47:22.914489 kernel: HOME=/ Jun 25 18:47:22.914497 kernel: TERM=linux Jun 25 18:47:22.914505 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:47:22.914515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:22.914527 systemd[1]: Detected virtualization kvm. Jun 25 18:47:22.914535 systemd[1]: Detected architecture x86-64. Jun 25 18:47:22.914543 systemd[1]: Running in initrd. Jun 25 18:47:22.914552 systemd[1]: No hostname configured, using default hostname. Jun 25 18:47:22.914560 systemd[1]: Hostname set to . Jun 25 18:47:22.914569 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:47:22.914577 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:47:22.914586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:22.914596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:22.914605 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:47:22.914614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:22.914622 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:47:22.914631 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:47:22.914641 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:47:22.914650 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:47:22.914660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:22.914669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:22.914729 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:22.914738 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:22.914746 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:47:22.914754 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:22.914763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:22.914771 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:22.914782 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:47:22.914790 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:47:22.914799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:22.914807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:22.914816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:22.914824 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:22.914832 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:47:22.914841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:22.914849 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:47:22.914860 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:47:22.914868 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:22.914876 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:22.914885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:22.914893 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:22.914902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:22.914913 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:47:22.914941 systemd-journald[192]: Collecting audit messages is disabled. Jun 25 18:47:22.914962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:47:22.914971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:22.914980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:22.914988 systemd-journald[192]: Journal started Jun 25 18:47:22.915006 systemd-journald[192]: Runtime Journal (/run/log/journal/1a73ac138775407baebd96e60a44bd3f) is 6.0M, max 48.3M, 42.3M free. Jun 25 18:47:22.918520 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:47:22.919030 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:22.920261 systemd-modules-load[194]: Inserted module 'overlay' Jun 25 18:47:22.926184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:22.929837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:22.935033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:22.937822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:47:22.940528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:22.950896 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:22.953894 dracut-cmdline[218]: dracut-dracut-053 Jun 25 18:47:22.956299 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:47:22.964697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jun 25 18:47:22.968166 systemd-modules-load[194]: Inserted module 'br_netfilter' Jun 25 18:47:22.969157 kernel: Bridge firewalling registered Jun 25 18:47:22.970083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:22.978808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:22.989647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:22.996831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:23.045057 systemd-resolved[276]: Positive Trust Anchors: Jun 25 18:47:23.045071 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:23.045101 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:23.055702 kernel: SCSI subsystem initialized Jun 25 18:47:23.055901 systemd-resolved[276]: Defaulting to hostname 'linux'. Jun 25 18:47:23.057766 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:23.057908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:23.069699 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:47:23.085703 kernel: iscsi: registered transport (tcp) Jun 25 18:47:23.110698 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:47:23.110736 kernel: QLogic iSCSI HBA Driver Jun 25 18:47:23.169288 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:23.180789 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:47:23.208711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:47:23.208757 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:47:23.208769 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:47:23.257710 kernel: raid6: avx2x4 gen() 18600 MB/s Jun 25 18:47:23.274707 kernel: raid6: avx2x2 gen() 15411 MB/s Jun 25 18:47:23.292062 kernel: raid6: avx2x1 gen() 15111 MB/s Jun 25 18:47:23.292100 kernel: raid6: using algorithm avx2x4 gen() 18600 MB/s Jun 25 18:47:23.310193 kernel: raid6: .... xor() 5599 MB/s, rmw enabled Jun 25 18:47:23.310247 kernel: raid6: using avx2x2 recovery algorithm Jun 25 18:47:23.342724 kernel: xor: automatically using best checksumming function avx Jun 25 18:47:23.592706 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:47:23.605575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:23.618802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:23.631342 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jun 25 18:47:23.636201 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:23.644856 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jun 25 18:47:23.659006 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jun 25 18:47:23.689080 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:47:23.711833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:23.777186 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:23.789826 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:47:23.804080 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:23.806782 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:23.809305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:23.812120 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:23.824389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:47:23.830670 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:47:23.835402 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 18:47:23.858600 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:47:23.858765 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:47:23.858777 kernel: AES CTR mode by8 optimization enabled Jun 25 18:47:23.858788 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:47:23.858798 kernel: GPT:9289727 != 19775487 Jun 25 18:47:23.858808 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:47:23.858825 kernel: GPT:9289727 != 19775487 Jun 25 18:47:23.858835 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:47:23.858845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:23.839126 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:23.847385 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:23.847626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:23.852025 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:23.856467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:23.856796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:23.859489 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:23.880699 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Jun 25 18:47:23.882705 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (459) Jun 25 18:47:23.884699 kernel: libata version 3.00 loaded. Jun 25 18:47:23.888555 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:47:23.896507 kernel: scsi host0: ata_piix Jun 25 18:47:23.896672 kernel: scsi host1: ata_piix Jun 25 18:47:23.896857 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 18:47:23.896869 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 18:47:23.889135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:23.904950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:23.913021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jun 25 18:47:23.927852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:47:23.934834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:47:23.940958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:47:23.943540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:47:23.959834 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:47:23.961838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:47:23.970834 disk-uuid[545]: Primary Header is updated. Jun 25 18:47:23.970834 disk-uuid[545]: Secondary Entries is updated. Jun 25 18:47:23.970834 disk-uuid[545]: Secondary Header is updated. Jun 25 18:47:23.974712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:23.979704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:23.984019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:24.048732 kernel: ata2: found unknown device (class 0) Jun 25 18:47:24.050699 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 18:47:24.052705 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 18:47:24.105713 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 18:47:24.118479 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:47:24.118501 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 18:47:24.980714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:47:24.981015 disk-uuid[548]: The operation has completed successfully. Jun 25 18:47:25.006579 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:47:25.006716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:47:25.037814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:47:25.043369 sh[578]: Success Jun 25 18:47:25.056701 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 18:47:25.089661 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:47:25.100351 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:47:25.105594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:47:25.114474 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:47:25.114544 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:25.114562 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:47:25.116267 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:47:25.116283 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:47:25.120956 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:47:25.123294 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:47:25.134800 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:47:25.137302 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 18:47:25.146223 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:25.146251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:25.146262 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:25.149700 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:25.157871 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:47:25.159701 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:25.168219 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:47:25.176818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:47:25.225324 ignition[672]: Ignition 2.19.0 Jun 25 18:47:25.225336 ignition[672]: Stage: fetch-offline Jun 25 18:47:25.225391 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:25.225426 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:25.225995 ignition[672]: parsed url from cmdline: "" Jun 25 18:47:25.226000 ignition[672]: no config URL provided Jun 25 18:47:25.226007 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:47:25.226017 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:47:25.226050 ignition[672]: op(1): [started] loading QEMU firmware config module Jun 25 18:47:25.226055 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:47:25.238902 ignition[672]: op(1): [finished] loading QEMU firmware config module Jun 25 18:47:25.260980 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:47:25.274826 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:47:25.281868 ignition[672]: parsing config with SHA512: 97bf1ec8c0483f0edad8e3381c6baa68757e8efc7af9800bc4538f179164b0bba473e28d8f4d983e06f227f9811d36f81c50eacf08a1e9763cbef6a62f94ee9a Jun 25 18:47:25.285343 unknown[672]: fetched base config from "system" Jun 25 18:47:25.285356 unknown[672]: fetched user config from "qemu" Jun 25 18:47:25.285873 ignition[672]: fetch-offline: fetch-offline passed Jun 25 18:47:25.285948 ignition[672]: Ignition finished successfully Jun 25 18:47:25.290477 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:47:25.297872 systemd-networkd[768]: lo: Link UP Jun 25 18:47:25.297882 systemd-networkd[768]: lo: Gained carrier Jun 25 18:47:25.299400 systemd-networkd[768]: Enumeration completed Jun 25 18:47:25.299478 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:47:25.299794 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:25.299798 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:47:25.300892 systemd-networkd[768]: eth0: Link UP Jun 25 18:47:25.300896 systemd-networkd[768]: eth0: Gained carrier Jun 25 18:47:25.300903 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:25.300947 systemd[1]: Reached target network.target - Network. Jun 25 18:47:25.302668 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jun 25 18:47:25.311794 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:47:25.319743 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:47:25.326060 ignition[771]: Ignition 2.19.0 Jun 25 18:47:25.326072 ignition[771]: Stage: kargs Jun 25 18:47:25.326280 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:25.326294 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:25.327380 ignition[771]: kargs: kargs passed Jun 25 18:47:25.330923 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:47:25.327430 ignition[771]: Ignition finished successfully Jun 25 18:47:25.341803 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:47:25.355880 ignition[781]: Ignition 2.19.0 Jun 25 18:47:25.355891 ignition[781]: Stage: disks Jun 25 18:47:25.356088 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:25.356099 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:25.358495 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:47:25.356993 ignition[781]: disks: disks passed Jun 25 18:47:25.361030 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:47:25.357035 ignition[781]: Ignition finished successfully Jun 25 18:47:25.362608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:47:25.362663 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:47:25.363008 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:47:25.363171 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:47:25.370814 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:47:25.383632 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:47:25.389885 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:47:25.403866 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:47:25.500705 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:47:25.501658 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:47:25.503446 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:47:25.511766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:47:25.513521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:47:25.515074 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:47:25.524934 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Jun 25 18:47:25.524959 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:25.524973 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:25.524987 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:25.515111 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jun 25 18:47:25.529211 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:25.515132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:47:25.521783 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:47:25.525986 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:47:25.530622 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:47:25.563782 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:47:25.568193 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:47:25.571639 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:47:25.575594 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:47:25.658416 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:47:25.671782 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:47:25.673520 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:47:25.681709 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:25.698316 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:47:25.703165 ignition[915]: INFO : Ignition 2.19.0 Jun 25 18:47:25.703165 ignition[915]: INFO : Stage: mount Jun 25 18:47:25.704874 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:25.704874 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:25.704874 ignition[915]: INFO : mount: mount passed Jun 25 18:47:25.704874 ignition[915]: INFO : Ignition finished successfully Jun 25 18:47:25.706389 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:47:25.715847 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:47:26.113942 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:47:26.125804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:47:26.132711 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Jun 25 18:47:26.132744 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:47:26.134064 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:47:26.134085 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:47:26.137695 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:47:26.139020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
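With the kargs and disks stages passed, the initrd has assembled the writable root seen above: /dev/vda9 (label ROOT, ext4) is checked and mounted at /sysroot, the BTRFS OEM partition /dev/vda6 is mounted at /sysroot/oem, and initrd-setup-root begins preparing the account files under /sysroot/etc. The same layout can be confirmed from a shell with standard util-linux tools (illustrative commands only):

    lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT /dev/vda   # vda6 = OEM (btrfs), vda9 = ROOT (ext4)
    findmnt /sysroot                                  # the future root filesystem
    findmnt /sysroot/oem                              # the OEM partition used during provisioning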
Jun 25 18:47:26.166684 ignition[945]: INFO : Ignition 2.19.0 Jun 25 18:47:26.166684 ignition[945]: INFO : Stage: files Jun 25 18:47:26.168443 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:26.168443 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:26.168443 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:47:26.168443 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:47:26.168443 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:47:26.174828 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:47:26.174828 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:47:26.174828 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:47:26.174828 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:47:26.174828 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:47:26.170879 unknown[945]: wrote ssh authorized keys file for user: core Jun 25 18:47:26.193746 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:47:26.252690 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:47:26.254918 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:47:26.254918 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 25 18:47:26.728890 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:47:26.826841 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:47:26.828984 ignition[945]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:47:26.828984 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 18:47:26.839818 systemd-networkd[768]: eth0: Gained IPv6LL Jun 25 18:47:27.102198 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:47:27.541201 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:47:27.541201 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 18:47:27.545854 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:47:27.567251 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:47:27.572828 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:47:27.574828 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:47:27.574828 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:47:27.574828 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:47:27.574828 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:47:27.574828 
ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:47:27.574828 ignition[945]: INFO : files: files passed Jun 25 18:47:27.574828 ignition[945]: INFO : Ignition finished successfully Jun 25 18:47:27.575924 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:47:27.587886 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:47:27.589950 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:47:27.593929 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:47:27.594093 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:47:27.600072 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:47:27.602477 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:27.602477 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:27.606425 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:47:27.610842 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:47:27.611154 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:47:27.620821 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:47:27.650151 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:47:27.650292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:47:27.653162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:47:27.655752 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:47:27.656994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:47:27.657844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:47:27.675958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:47:27.681920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:47:27.692111 systemd[1]: Stopped target network.target - Network. Jun 25 18:47:27.693222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:27.693502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:27.694076 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:47:27.694448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:47:27.694554 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:47:27.704326 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:47:27.704460 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:47:27.705024 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:47:27.705409 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:47:27.705974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
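Every "writing file", "writing link" and "processing unit" line in the files stage above maps to an entry in the supplied config's storage and systemd sections. A sketch of the shape of that config (Ignition spec 3.x JSON; paths and URLs are taken from the log, everything else is illustrative):

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n..." },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }

The "setting preset to enabled/disabled" lines are Ignition applying exactly these enabled flags before the real system boots.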
Jun 25 18:47:27.706361 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:47:27.706755 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:47:27.707320 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:47:27.707706 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:47:27.708272 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:47:27.708622 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:47:27.708739 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:47:27.728006 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:27.729256 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:27.729567 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:47:27.729939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:27.736376 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:47:27.736555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:47:27.741176 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:47:27.741307 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:47:27.742569 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:47:27.743053 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:47:27.747809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:27.748076 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:47:27.750977 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:47:27.751429 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:47:27.751545 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:47:27.756068 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:47:27.756164 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:47:27.758235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:47:27.758355 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:47:27.760746 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:47:27.760854 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:47:27.777879 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:47:27.779815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:47:27.782692 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:47:27.785360 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:47:27.787835 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:47:27.788021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:27.792147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:47:27.792312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 25 18:47:27.797706 ignition[1000]: INFO : Ignition 2.19.0 Jun 25 18:47:27.797706 ignition[1000]: INFO : Stage: umount Jun 25 18:47:27.797706 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:47:27.797706 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:47:27.797706 ignition[1000]: INFO : umount: umount passed Jun 25 18:47:27.797706 ignition[1000]: INFO : Ignition finished successfully Jun 25 18:47:27.792746 systemd-networkd[768]: eth0: DHCPv6 lease lost Jun 25 18:47:27.799890 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:47:27.800041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:47:27.803580 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:47:27.803817 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:47:27.806417 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:47:27.806527 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:47:27.810087 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:47:27.810239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:47:27.813088 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:47:27.815414 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:47:27.815465 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:27.817354 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:47:27.817413 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:47:27.819733 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:47:27.819781 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:47:27.822053 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:47:27.822098 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:47:27.824286 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:47:27.824332 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:47:27.840926 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:47:27.842161 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:47:27.842274 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:47:27.844815 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:47:27.844871 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:27.847296 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:47:27.847351 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:27.850121 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:47:27.850197 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:27.852908 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:27.863492 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:47:27.863654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:47:27.872476 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jun 25 18:47:27.872709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:27.875380 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:47:27.875442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:27.877800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:47:27.877851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:27.880158 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:47:27.880222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:47:27.882698 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:47:27.882758 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:47:27.885023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:47:27.885080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:47:27.901866 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:47:27.903196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:47:27.903259 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:27.905900 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:47:27.905949 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:27.908668 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:47:27.908732 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:27.910211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:27.910260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:27.912141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:47:27.912250 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:47:27.963505 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:47:27.963645 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:47:27.965935 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:47:27.966883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:47:27.966941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:47:27.978813 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:47:27.987829 systemd[1]: Switching root. Jun 25 18:47:28.019825 systemd-journald[192]: Journal stopped Jun 25 18:47:29.201939 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
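Here the initrd phase ends: PID 1 switches root into /sysroot and the initrd's journald receives SIGTERM, so everything after this point is written by the new journald instance started from the real root filesystem. With a persistent journal, the handoff can be reviewed after the fact with standard journalctl options, e.g.:

    journalctl -b 0 -o short-precise                       # this boot, microsecond timestamps as above
    journalctl -b 0 -g 'Switching root|Journal stopped'    # locate the switch-root moment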
Jun 25 18:47:29.202004 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:47:29.202023 kernel: SELinux: policy capability open_perms=1 Jun 25 18:47:29.202035 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:47:29.202046 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:47:29.202058 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:47:29.202069 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:47:29.202086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:47:29.202112 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:47:29.202124 kernel: audit: type=1403 audit(1719341248.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:47:29.202137 systemd[1]: Successfully loaded SELinux policy in 39.652ms. Jun 25 18:47:29.202163 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.175ms. Jun 25 18:47:29.202176 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:47:29.202188 systemd[1]: Detected virtualization kvm. Jun 25 18:47:29.202201 systemd[1]: Detected architecture x86-64. Jun 25 18:47:29.202212 systemd[1]: Detected first boot. Jun 25 18:47:29.202227 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:47:29.202239 zram_generator::config[1044]: No configuration found. Jun 25 18:47:29.202253 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:47:29.202265 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:47:29.202277 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:47:29.202289 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:47:29.202303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:47:29.202315 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:47:29.202330 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:47:29.202342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:47:29.202354 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:47:29.202366 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:47:29.202379 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:47:29.202392 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:47:29.202403 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:47:29.202416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:47:29.202432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:47:29.202447 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:47:29.202459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 18:47:29.202472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:47:29.202484 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:47:29.202496 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:47:29.202508 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:47:29.202520 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:47:29.202533 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:47:29.202552 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:47:29.202565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:47:29.202577 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:47:29.202589 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:47:29.202602 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:47:29.202614 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:47:29.202626 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:47:29.202638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:47:29.202650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:47:29.202665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:47:29.202690 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:47:29.202702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:47:29.202715 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:47:29.202727 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:47:29.202739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:29.202752 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:47:29.202764 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:47:29.202776 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:47:29.202791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:47:29.202804 systemd[1]: Reached target machines.target - Containers. Jun 25 18:47:29.202816 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:47:29.202829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:29.202841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:47:29.202853 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:47:29.202865 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:47:29.202877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:47:29.202891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:29.202904 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jun 25 18:47:29.202916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:29.202928 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:47:29.202940 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:47:29.202952 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:47:29.202964 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:47:29.202979 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:47:29.202993 kernel: loop: module loaded Jun 25 18:47:29.203005 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:47:29.203018 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:47:29.203030 kernel: fuse: init (API version 7.39) Jun 25 18:47:29.203041 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:47:29.203054 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:47:29.203086 systemd-journald[1111]: Collecting audit messages is disabled. Jun 25 18:47:29.203121 systemd-journald[1111]: Journal started Jun 25 18:47:29.203145 systemd-journald[1111]: Runtime Journal (/run/log/journal/1a73ac138775407baebd96e60a44bd3f) is 6.0M, max 48.3M, 42.3M free. Jun 25 18:47:28.989077 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:47:29.006659 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:47:29.007123 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:47:29.212403 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:47:29.212475 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:47:29.212491 systemd[1]: Stopped verity-setup.service. Jun 25 18:47:29.212506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:29.214785 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:47:29.220257 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:47:29.221615 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:47:29.223014 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:47:29.224177 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:47:29.225436 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:47:29.226782 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:47:29.228218 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:47:29.229994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:47:29.232021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:47:29.232260 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:47:29.234495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:29.234733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:29.236599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:47:29.236838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:29.238806 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:47:29.239024 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:47:29.240694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:29.241052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:29.243093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:47:29.244712 kernel: ACPI: bus type drm_connector registered Jun 25 18:47:29.245663 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:47:29.248474 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:47:29.248727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:47:29.250436 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:47:29.266859 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:47:29.276764 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:47:29.279374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:47:29.280759 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:47:29.280788 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:47:29.283029 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:47:29.288812 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:47:29.292557 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:47:29.294030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:29.297356 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:47:29.301734 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:47:29.303421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:47:29.304823 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:47:29.306809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:47:29.313782 systemd-journald[1111]: Time spent on flushing to /var/log/journal/1a73ac138775407baebd96e60a44bd3f is 26.990ms for 989 entries. Jun 25 18:47:29.313782 systemd-journald[1111]: System Journal (/var/log/journal/1a73ac138775407baebd96e60a44bd3f) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:47:29.365157 systemd-journald[1111]: Received client request to flush runtime journal. Jun 25 18:47:29.365225 kernel: loop0: detected capacity change from 0 to 80568 Jun 25 18:47:29.365258 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:47:29.312445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:47:29.315982 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jun 25 18:47:29.319913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:47:29.323784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:47:29.326804 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:47:29.331076 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:47:29.332707 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:47:29.369354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:47:29.334329 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:47:29.344955 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:47:29.347751 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:47:29.354339 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:47:29.356141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:47:29.370307 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:47:29.372706 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:47:29.381767 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:47:29.382423 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:47:29.387003 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jun 25 18:47:29.387025 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jun 25 18:47:29.394983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:47:29.402737 kernel: loop1: detected capacity change from 0 to 139760 Jun 25 18:47:29.404950 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:47:29.434726 kernel: loop2: detected capacity change from 0 to 211296 Jun 25 18:47:29.436416 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:47:29.448898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:47:29.468353 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jun 25 18:47:29.468373 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jun 25 18:47:29.474100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:47:29.479695 kernel: loop3: detected capacity change from 0 to 80568 Jun 25 18:47:29.487729 kernel: loop4: detected capacity change from 0 to 139760 Jun 25 18:47:29.501718 kernel: loop5: detected capacity change from 0 to 211296 Jun 25 18:47:29.508157 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:47:29.508792 (sd-merge)[1184]: Merged extensions into '/usr'. Jun 25 18:47:29.513620 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:47:29.513637 systemd[1]: Reloading... Jun 25 18:47:29.565704 zram_generator::config[1212]: No configuration found. Jun 25 18:47:29.663700 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
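The (sd-merge) lines above are systemd-sysext at work: it finds the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images (the kubernetes one was written to /opt/extensions and linked from /etc/extensions/kubernetes.raw by Ignition earlier) and overlays them onto /usr, which is how those payloads end up under the otherwise read-only /usr. On a running system the same mechanism can be inspected with the standard systemd-sysext verbs (illustrative):

    systemd-sysext status    # list known extension images and whether they are merged
    systemd-sysext refresh   # unmerge and re-merge after changing images under /etc/extensions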
Jun 25 18:47:29.688873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:29.737998 systemd[1]: Reloading finished in 223 ms. Jun 25 18:47:29.768012 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:47:29.769880 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:47:29.798884 systemd[1]: Starting ensure-sysext.service... Jun 25 18:47:29.801009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:47:29.810914 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:47:29.810931 systemd[1]: Reloading... Jun 25 18:47:29.831117 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:47:29.831565 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:47:29.832828 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:47:29.833247 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jun 25 18:47:29.833337 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jun 25 18:47:29.837497 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:47:29.837514 systemd-tmpfiles[1246]: Skipping /boot Jun 25 18:47:29.852659 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:47:29.852693 systemd-tmpfiles[1246]: Skipping /boot Jun 25 18:47:29.881718 zram_generator::config[1278]: No configuration found. Jun 25 18:47:30.007401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:30.075771 systemd[1]: Reloading finished in 264 ms. Jun 25 18:47:30.098772 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:47:30.113259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:47:30.121114 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:30.124116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:47:30.126816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:47:30.132365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:47:30.138795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:47:30.143736 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:47:30.150782 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:47:30.154258 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:30.154815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:30.156375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 25 18:47:30.161996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:30.167234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:30.168764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:30.168902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:30.171899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:47:30.172821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:30.175631 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jun 25 18:47:30.176876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:30.177173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:30.179846 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:30.180104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:30.188270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:47:30.188570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:47:30.189318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:47:30.192050 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:47:30.201239 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:30.202131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:47:30.205530 augenrules[1340]: No rules Jun 25 18:47:30.208003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:47:30.212778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:47:30.217227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:47:30.220020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:47:30.221857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:47:30.226864 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:47:30.228229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:47:30.230431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:47:30.233292 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:47:30.236741 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:30.239323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:47:30.242914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:47:30.245469 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 25 18:47:30.246903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:47:30.258065 systemd[1]: Finished ensure-sysext.service. Jun 25 18:47:30.276145 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:47:30.276374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:47:30.279137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:47:30.279362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:47:30.309990 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366) Jun 25 18:47:30.315155 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:47:30.317047 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:47:30.327606 systemd-resolved[1314]: Positive Trust Anchors: Jun 25 18:47:30.327617 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:47:30.327657 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:47:30.333765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1364) Jun 25 18:47:30.335495 systemd-resolved[1314]: Defaulting to hostname 'linux'. Jun 25 18:47:30.338899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:47:30.349257 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:47:30.359999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:47:30.366061 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:47:30.367484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:47:30.367533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:47:30.372614 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jun 25 18:47:30.371416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:47:30.373492 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:47:30.375704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:47:30.385952 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 25 18:47:30.384991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jun 25 18:47:30.393775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 18:47:30.404915 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:47:30.409376 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:47:30.476783 systemd-networkd[1389]: lo: Link UP Jun 25 18:47:30.476793 systemd-networkd[1389]: lo: Gained carrier Jun 25 18:47:30.478402 systemd-networkd[1389]: Enumeration completed Jun 25 18:47:30.517985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:30.518516 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:30.518521 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:47:30.519397 systemd-networkd[1389]: eth0: Link UP Jun 25 18:47:30.519402 systemd-networkd[1389]: eth0: Gained carrier Jun 25 18:47:30.519414 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:47:30.519445 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:47:30.521241 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:47:30.540707 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:47:30.537879 systemd[1]: Reached target network.target - Network. Jun 25 18:47:30.540009 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:47:30.547948 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:47:30.551757 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:47:30.554517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:47:30.554921 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jun 25 18:47:30.555113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:31.277744 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 18:47:31.277794 systemd-timesyncd[1391]: Initial clock synchronization to Tue 2024-06-25 18:47:31.276704 UTC. Jun 25 18:47:31.277911 kernel: kvm_amd: TSC scaling supported Jun 25 18:47:31.277946 kernel: kvm_amd: Nested Virtualization enabled Jun 25 18:47:31.277962 kernel: kvm_amd: Nested Paging enabled Jun 25 18:47:31.277978 kernel: kvm_amd: LBR virtualization supported Jun 25 18:47:31.277980 systemd-resolved[1314]: Clock change detected. Flushing caches. Jun 25 18:47:31.279038 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 25 18:47:31.279067 kernel: kvm_amd: Virtual GIF supported Jun 25 18:47:31.281787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:47:31.308688 kernel: EDAC MC: Ver: 3.0.0 Jun 25 18:47:31.341324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:47:31.364057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:47:31.377773 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:47:31.387777 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
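The new networkd instance walks the same path as the one in the initrd: eth0 matches /usr/lib/systemd/network/zz-default.network, obtains 10.0.0.148/16 over DHCP from 10.0.0.1, and timesyncd then synchronizes against the same host. A catch-all unit of that kind looks roughly like this (a sketch in systemd.network syntax, not the literal Flatcar file):

    [Match]
    Name=eth* en*

    [Network]
    DHCP=yes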
Jun 25 18:47:31.417833 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:47:31.419400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:47:31.420574 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:47:31.421786 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:47:31.423098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:47:31.424736 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:47:31.426066 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:47:31.427386 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:47:31.428697 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:47:31.428728 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:47:31.429670 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:47:31.431495 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:47:31.434357 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:47:31.449423 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:47:31.452155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:47:31.453855 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:47:31.455131 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:47:31.456247 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:47:31.457284 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:47:31.457314 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:47:31.458505 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:47:31.460674 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:47:31.462754 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:47:31.463825 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:47:31.469091 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:47:31.470308 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:47:31.471887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:47:31.475548 jq[1422]: false Jun 25 18:47:31.476746 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:47:31.479436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:47:31.482446 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:47:31.488888 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:47:31.490556 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jun 25 18:47:31.491385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:47:31.494550 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:47:31.499792 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:47:31.501000 extend-filesystems[1423]: Found loop3 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found loop4 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found loop5 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found sr0 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda1 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda2 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda3 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found usr Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda4 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda6 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda7 Jun 25 18:47:31.501000 extend-filesystems[1423]: Found vda9 Jun 25 18:47:31.501000 extend-filesystems[1423]: Checking size of /dev/vda9 Jun 25 18:47:31.502030 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:47:31.515858 dbus-daemon[1421]: [system] SELinux support is enabled Jun 25 18:47:31.507039 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:47:31.507258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:47:31.531790 jq[1434]: true Jun 25 18:47:31.512410 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:47:31.512733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:47:31.519503 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:47:31.529934 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:47:31.530192 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:47:31.543270 jq[1444]: true Jun 25 18:47:31.551925 extend-filesystems[1423]: Resized partition /dev/vda9 Jun 25 18:47:31.556088 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:47:31.556906 update_engine[1431]: I0625 18:47:31.556418 1431 main.cc:92] Flatcar Update Engine starting Jun 25 18:47:31.557597 extend-filesystems[1456]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:47:31.575796 tar[1441]: linux-amd64/helm Jun 25 18:47:31.575996 update_engine[1431]: I0625 18:47:31.575150 1431 update_check_scheduler.cc:74] Next update check in 8m28s Jun 25 18:47:31.562273 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:47:31.583740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1357) Jun 25 18:47:31.579446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:47:31.579479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 25 18:47:31.580993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:47:31.581009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:47:31.588903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:47:31.598188 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:47:31.621182 systemd-logind[1429]: Watching system buttons on /dev/input/event2 (Power Button) Jun 25 18:47:31.621210 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:47:31.626515 systemd-logind[1429]: New seat seat0. Jun 25 18:47:31.631627 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:47:31.636750 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:47:31.646143 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:47:31.662601 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:47:31.662601 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:47:31.662601 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:47:31.673963 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Jun 25 18:47:31.674045 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:47:31.665006 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:47:31.665248 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:47:31.675380 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:47:31.678427 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:47:31.689184 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:47:31.718824 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:47:31.730229 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:47:31.739167 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:47:31.739496 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:47:31.749377 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:47:31.761075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:47:31.764161 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:47:31.770817 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:47:31.772250 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:47:31.805754 containerd[1445]: time="2024-06-25T18:47:31.805625455Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:47:31.833156 containerd[1445]: time="2024-06-25T18:47:31.833090442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:47:31.833156 containerd[1445]: time="2024-06-25T18:47:31.833149853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835219 containerd[1445]: time="2024-06-25T18:47:31.835171825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835219 containerd[1445]: time="2024-06-25T18:47:31.835206860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835578 containerd[1445]: time="2024-06-25T18:47:31.835542500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835578 containerd[1445]: time="2024-06-25T18:47:31.835567877Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:47:31.835741 containerd[1445]: time="2024-06-25T18:47:31.835709924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835820 containerd[1445]: time="2024-06-25T18:47:31.835796947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835842 containerd[1445]: time="2024-06-25T18:47:31.835817165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.835939 containerd[1445]: time="2024-06-25T18:47:31.835917673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.836254 containerd[1445]: time="2024-06-25T18:47:31.836216564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.836254 containerd[1445]: time="2024-06-25T18:47:31.836244466Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:47:31.836315 containerd[1445]: time="2024-06-25T18:47:31.836258843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:47:31.836447 containerd[1445]: time="2024-06-25T18:47:31.836407251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:47:31.836447 containerd[1445]: time="2024-06-25T18:47:31.836430795Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:47:31.836521 containerd[1445]: time="2024-06-25T18:47:31.836505275Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:47:31.836550 containerd[1445]: time="2024-06-25T18:47:31.836519872Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842267279Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842325508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842340235Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842388616Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842406249Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842420105Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842435604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842611775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:47:31.842703 containerd[1445]: time="2024-06-25T18:47:31.842628686Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:47:31.843076 containerd[1445]: time="2024-06-25T18:47:31.843045848Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:47:31.843076 containerd[1445]: time="2024-06-25T18:47:31.843071336Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:47:31.843076 containerd[1445]: time="2024-06-25T18:47:31.843088048Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843108967Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843125307Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843139805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843155885Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843171825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843185771Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843203 containerd[1445]: time="2024-06-25T18:47:31.843202813Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:47:31.843388 containerd[1445]: time="2024-06-25T18:47:31.843331945Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jun 25 18:47:31.843808 containerd[1445]: time="2024-06-25T18:47:31.843782720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:47:31.843854 containerd[1445]: time="2024-06-25T18:47:31.843815091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.843854 containerd[1445]: time="2024-06-25T18:47:31.843833275Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:47:31.843898 containerd[1445]: time="2024-06-25T18:47:31.843858272Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:47:31.843951 containerd[1445]: time="2024-06-25T18:47:31.843931239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844073 containerd[1445]: time="2024-06-25T18:47:31.844047857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844073 containerd[1445]: time="2024-06-25T18:47:31.844068776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844133 containerd[1445]: time="2024-06-25T18:47:31.844083023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844133 containerd[1445]: time="2024-06-25T18:47:31.844098482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844133 containerd[1445]: time="2024-06-25T18:47:31.844113450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844133 containerd[1445]: time="2024-06-25T18:47:31.844129901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844204 containerd[1445]: time="2024-06-25T18:47:31.844143497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844204 containerd[1445]: time="2024-06-25T18:47:31.844167441Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:47:31.844354 containerd[1445]: time="2024-06-25T18:47:31.844330507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844377 containerd[1445]: time="2024-06-25T18:47:31.844352769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844377 containerd[1445]: time="2024-06-25T18:47:31.844368639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844414 containerd[1445]: time="2024-06-25T18:47:31.844385821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844414 containerd[1445]: time="2024-06-25T18:47:31.844401059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844451 containerd[1445]: time="2024-06-25T18:47:31.844418031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jun 25 18:47:31.844451 containerd[1445]: time="2024-06-25T18:47:31.844432589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844451 containerd[1445]: time="2024-06-25T18:47:31.844445803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:47:31.844822 containerd[1445]: time="2024-06-25T18:47:31.844760674Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:47:31.844822 containerd[1445]: time="2024-06-25T18:47:31.844824283Z" level=info msg="Connect containerd service" Jun 25 18:47:31.844978 containerd[1445]: time="2024-06-25T18:47:31.844853107Z" level=info msg="using legacy CRI server" Jun 25 18:47:31.844978 containerd[1445]: time="2024-06-25T18:47:31.844861252Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:47:31.844978 containerd[1445]: time="2024-06-25T18:47:31.844970978Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" 
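The long CRI configuration dump above shows, among other things, the overlayfs snapshotter, runc as the default runtime via io.containerd.runc.v2 with SystemdCgroup:true, and registry.k8s.io/pause:3.8 as the sandbox image. Expressed as a containerd v2 config file, the corresponding fragment would look roughly like the sketch below; this assumes the stock /etc/containerd/config.toml layout, whereas Flatcar may ship these settings in its own packaged config rather than a hand-edited file.

  # Sketch of a config.toml fragment matching the CRI settings dumped above.
  version = 2

  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # Delegate container cgroups to systemd, matching SystemdCgroup:true in the dump.
    SystemdCgroup = true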
Jun 25 18:47:31.845657 containerd[1445]: time="2024-06-25T18:47:31.845620756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:47:31.845698 containerd[1445]: time="2024-06-25T18:47:31.845685448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:47:31.845723 containerd[1445]: time="2024-06-25T18:47:31.845706046Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:47:31.845723 containerd[1445]: time="2024-06-25T18:47:31.845718780Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:47:31.845780 containerd[1445]: time="2024-06-25T18:47:31.845733678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.845918344Z" level=info msg="Start subscribing containerd event" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846003975Z" level=info msg="Start recovering state" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846101438Z" level=info msg="Start event monitor" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846115324Z" level=info msg="Start snapshots syncer" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846126144Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846134880Z" level=info msg="Start streaming server" Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846206725Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846265525Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:47:31.846679 containerd[1445]: time="2024-06-25T18:47:31.846388616Z" level=info msg="containerd successfully booted in 0.043309s" Jun 25 18:47:31.846508 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:47:32.033398 tar[1441]: linux-amd64/LICENSE Jun 25 18:47:32.033501 tar[1441]: linux-amd64/README.md Jun 25 18:47:32.052388 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:47:32.614871 systemd-networkd[1389]: eth0: Gained IPv6LL Jun 25 18:47:32.618264 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:47:32.624811 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:47:32.636847 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:47:32.639506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:32.647353 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:47:32.669464 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:47:32.669730 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:47:32.671723 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jun 25 18:47:32.674286 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:47:33.285713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:33.287856 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:47:33.289574 systemd[1]: Startup finished in 889ms (kernel) + 5.691s (initrd) + 4.195s (userspace) = 10.776s. Jun 25 18:47:33.291597 (kubelet)[1533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:47:33.788252 kubelet[1533]: E0625 18:47:33.788167 1533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:47:33.793098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:47:33.793298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:47:33.793604 systemd[1]: kubelet.service: Consumed 1.022s CPU time. Jun 25 18:47:38.264740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:47:38.265910 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:34036.service - OpenSSH per-connection server daemon (10.0.0.1:34036). Jun 25 18:47:38.312328 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 34036 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:38.314069 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.322536 systemd-logind[1429]: New session 1 of user core. Jun 25 18:47:38.323825 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:47:38.330878 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:47:38.343063 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:47:38.353051 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:47:38.355781 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.458452 systemd[1552]: Queued start job for default target default.target. Jun 25 18:47:38.473051 systemd[1552]: Created slice app.slice - User Application Slice. Jun 25 18:47:38.473080 systemd[1552]: Reached target paths.target - Paths. Jun 25 18:47:38.473095 systemd[1552]: Reached target timers.target - Timers. Jun 25 18:47:38.474719 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:47:38.486453 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:47:38.486635 systemd[1552]: Reached target sockets.target - Sockets. Jun 25 18:47:38.486682 systemd[1552]: Reached target basic.target - Basic System. Jun 25 18:47:38.486738 systemd[1552]: Reached target default.target - Main User Target. Jun 25 18:47:38.486786 systemd[1552]: Startup finished in 123ms. Jun 25 18:47:38.487315 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:47:38.489144 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:47:38.550795 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:34046.service - OpenSSH per-connection server daemon (10.0.0.1:34046). 
Jun 25 18:47:38.592032 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 34046 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:38.593739 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.597693 systemd-logind[1429]: New session 2 of user core. Jun 25 18:47:38.611760 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:47:38.667138 sshd[1563]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:38.682567 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:34046.service: Deactivated successfully. Jun 25 18:47:38.684380 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:47:38.685982 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:47:38.703032 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:34048.service - OpenSSH per-connection server daemon (10.0.0.1:34048). Jun 25 18:47:38.703902 systemd-logind[1429]: Removed session 2. Jun 25 18:47:38.730910 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 34048 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:38.732198 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.735560 systemd-logind[1429]: New session 3 of user core. Jun 25 18:47:38.736711 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:47:38.786093 sshd[1570]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:38.801337 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:34048.service: Deactivated successfully. Jun 25 18:47:38.803142 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:47:38.804707 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:47:38.813968 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064). Jun 25 18:47:38.814830 systemd-logind[1429]: Removed session 3. Jun 25 18:47:38.841829 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:38.843158 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.846700 systemd-logind[1429]: New session 4 of user core. Jun 25 18:47:38.859755 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:47:38.912758 sshd[1578]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:38.933146 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:34064.service: Deactivated successfully. Jun 25 18:47:38.934577 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:47:38.935980 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:47:38.937114 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:34074.service - OpenSSH per-connection server daemon (10.0.0.1:34074). Jun 25 18:47:38.937812 systemd-logind[1429]: Removed session 4. Jun 25 18:47:38.969199 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 34074 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:38.970533 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:38.974025 systemd-logind[1429]: New session 5 of user core. Jun 25 18:47:38.984749 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 25 18:47:39.041822 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:47:39.042106 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:39.056713 sudo[1588]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:39.058388 sshd[1585]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:39.067173 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:34074.service: Deactivated successfully. Jun 25 18:47:39.068774 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:47:39.070357 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:47:39.077043 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:34078.service - OpenSSH per-connection server daemon (10.0.0.1:34078). Jun 25 18:47:39.077887 systemd-logind[1429]: Removed session 5. Jun 25 18:47:39.104767 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 34078 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:39.106183 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:39.109771 systemd-logind[1429]: New session 6 of user core. Jun 25 18:47:39.119753 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:47:39.173102 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:47:39.173375 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:39.177380 sudo[1597]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:39.185022 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:47:39.185401 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:39.203876 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:39.205415 auditctl[1600]: No rules Jun 25 18:47:39.205822 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:47:39.206041 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:39.208656 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:47:39.238909 augenrules[1618]: No rules Jun 25 18:47:39.240864 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:47:39.242109 sudo[1596]: pam_unix(sudo:session): session closed for user root Jun 25 18:47:39.244218 sshd[1593]: pam_unix(sshd:session): session closed for user core Jun 25 18:47:39.257202 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:34078.service: Deactivated successfully. Jun 25 18:47:39.259613 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:47:39.261587 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:47:39.269092 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088). Jun 25 18:47:39.270143 systemd-logind[1429]: Removed session 6. Jun 25 18:47:39.296609 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:47:39.298120 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:47:39.302208 systemd-logind[1429]: New session 7 of user core. Jun 25 18:47:39.315768 systemd[1]: Started session-7.scope - Session 7 of User core. 
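The audit-rules sequence above is internally consistent: the earlier sudo command removed /etc/audit/rules.d/80-selinux.rules and /etc/audit/rules.d/99-default.rules, and augenrules builds the active ruleset by concatenating the fragments under /etc/audit/rules.d/, so both auditctl and augenrules subsequently report "No rules". Purely for illustration, a rules.d fragment uses ordinary auditctl syntax, e.g. (hypothetical file name and rule, not present on this host):

  # /etc/audit/rules.d/90-identity.rules  (hypothetical example fragment)
  # Watch /etc/passwd for writes and attribute changes, tagged with key "identity".
  -w /etc/passwd -p wa -k identity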
Jun 25 18:47:39.369719 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:47:39.370012 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:47:39.471900 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:47:39.472037 (dockerd)[1640]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:47:39.715171 dockerd[1640]: time="2024-06-25T18:47:39.715034104Z" level=info msg="Starting up" Jun 25 18:47:40.304506 dockerd[1640]: time="2024-06-25T18:47:40.304460138Z" level=info msg="Loading containers: start." Jun 25 18:47:40.410663 kernel: Initializing XFRM netlink socket Jun 25 18:47:40.493489 systemd-networkd[1389]: docker0: Link UP Jun 25 18:47:40.513169 dockerd[1640]: time="2024-06-25T18:47:40.513139692Z" level=info msg="Loading containers: done." Jun 25 18:47:40.558671 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2304252758-merged.mount: Deactivated successfully. Jun 25 18:47:40.561938 dockerd[1640]: time="2024-06-25T18:47:40.561902873Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:47:40.562097 dockerd[1640]: time="2024-06-25T18:47:40.562077681Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:47:40.562202 dockerd[1640]: time="2024-06-25T18:47:40.562178680Z" level=info msg="Daemon has completed initialization" Jun 25 18:47:40.590556 dockerd[1640]: time="2024-06-25T18:47:40.590511604Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:47:40.590751 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:47:41.222008 containerd[1445]: time="2024-06-25T18:47:41.221958187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 18:47:41.906896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632237649.mount: Deactivated successfully. 
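dockerd above reports graphdriver=overlay2, apparently selected by default on this ext4 root. Pinning the choice explicitly would be done in /etc/docker/daemon.json, roughly as in the sketch below; this is illustrative only, no such file appears in this log, and it does not affect the native-diff warning, which the log attributes to the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR setting.

  {
    "storage-driver": "overlay2"
  }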
Jun 25 18:47:42.968769 containerd[1445]: time="2024-06-25T18:47:42.968710196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:42.969562 containerd[1445]: time="2024-06-25T18:47:42.969520906Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jun 25 18:47:42.970827 containerd[1445]: time="2024-06-25T18:47:42.970800495Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:42.974831 containerd[1445]: time="2024-06-25T18:47:42.974471829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:42.975618 containerd[1445]: time="2024-06-25T18:47:42.975586629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 1.753585693s" Jun 25 18:47:42.975663 containerd[1445]: time="2024-06-25T18:47:42.975618509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 18:47:42.997831 containerd[1445]: time="2024-06-25T18:47:42.997783398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 18:47:44.026371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:47:44.039907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:44.191132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:44.196031 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:47:44.243738 kubelet[1853]: E0625 18:47:44.243634 1853 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:47:44.251437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:47:44.251622 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
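Both kubelet start attempts so far (here and at 18:47:33 above) fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by kubeadm during init/join, so the failure is expected until the node is bootstrapped; the start at 18:47:54 below gets past config loading, so the file evidently exists by then. For orientation, a minimal KubeletConfiguration of the kind that ends up at that path looks roughly like this sketch (illustrative only, not the file this node eventually receives):

  # Minimal illustrative KubeletConfiguration; the field values are assumptions.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Use the systemd cgroup driver, matching CgroupDriver "systemd" in the node
  # config dumped at 18:47:55 below and SystemdCgroup=true on the containerd side.
  cgroupDriver: systemd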
Jun 25 18:47:44.994279 containerd[1445]: time="2024-06-25T18:47:44.994209958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:44.995150 containerd[1445]: time="2024-06-25T18:47:44.995082153Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jun 25 18:47:44.996316 containerd[1445]: time="2024-06-25T18:47:44.996288094Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:44.999012 containerd[1445]: time="2024-06-25T18:47:44.998979821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:44.999894 containerd[1445]: time="2024-06-25T18:47:44.999864099Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.002035907s" Jun 25 18:47:44.999894 containerd[1445]: time="2024-06-25T18:47:44.999894015Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 18:47:45.024451 containerd[1445]: time="2024-06-25T18:47:45.024386829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 18:47:46.196808 containerd[1445]: time="2024-06-25T18:47:46.196735663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:46.197531 containerd[1445]: time="2024-06-25T18:47:46.197474348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jun 25 18:47:46.198780 containerd[1445]: time="2024-06-25T18:47:46.198734771Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:46.201490 containerd[1445]: time="2024-06-25T18:47:46.201434473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:46.202511 containerd[1445]: time="2024-06-25T18:47:46.202476878Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.178044393s" Jun 25 18:47:46.202569 containerd[1445]: time="2024-06-25T18:47:46.202511693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 18:47:46.225284 
containerd[1445]: time="2024-06-25T18:47:46.225244277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 18:47:47.139566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856210476.mount: Deactivated successfully. Jun 25 18:47:47.709794 containerd[1445]: time="2024-06-25T18:47:47.709730668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:47.710816 containerd[1445]: time="2024-06-25T18:47:47.710767092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jun 25 18:47:47.712072 containerd[1445]: time="2024-06-25T18:47:47.712036321Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:47.714120 containerd[1445]: time="2024-06-25T18:47:47.714087998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:47.714752 containerd[1445]: time="2024-06-25T18:47:47.714711437Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 1.489423238s" Jun 25 18:47:47.714752 containerd[1445]: time="2024-06-25T18:47:47.714741554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 18:47:47.736259 containerd[1445]: time="2024-06-25T18:47:47.736206460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:47:48.408892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173773748.mount: Deactivated successfully. 
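Each "Pulled image ... in N s" entry pairs a byte count with a wall-clock duration, so an approximate effective pull rate falls out directly; taking the kube-proxy pull just above as a worked example:

  28,408,353 bytes / 1.489423238 s ≈ 19.1 MB/s (≈ 18.2 MiB/s)

This is a rough effective rate for the whole pull (fetch plus unpack), not a pure network measurement, and it says nothing about the unpacked on-disk size of the image.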
Jun 25 18:47:49.145249 containerd[1445]: time="2024-06-25T18:47:49.145192339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.146256 containerd[1445]: time="2024-06-25T18:47:49.146177075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 18:47:49.147422 containerd[1445]: time="2024-06-25T18:47:49.147387685Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.150295 containerd[1445]: time="2024-06-25T18:47:49.150253699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.151539 containerd[1445]: time="2024-06-25T18:47:49.151509253Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.415259581s" Jun 25 18:47:49.151573 containerd[1445]: time="2024-06-25T18:47:49.151540231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:47:49.176237 containerd[1445]: time="2024-06-25T18:47:49.176198615Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:47:49.674049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133046546.mount: Deactivated successfully. 
Jun 25 18:47:49.680133 containerd[1445]: time="2024-06-25T18:47:49.680090372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.680811 containerd[1445]: time="2024-06-25T18:47:49.680757433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:47:49.681994 containerd[1445]: time="2024-06-25T18:47:49.681962072Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.684317 containerd[1445]: time="2024-06-25T18:47:49.684286239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:49.684960 containerd[1445]: time="2024-06-25T18:47:49.684929766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 508.690544ms" Jun 25 18:47:49.684998 containerd[1445]: time="2024-06-25T18:47:49.684958059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:47:49.705352 containerd[1445]: time="2024-06-25T18:47:49.705311802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:47:50.249946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141939528.mount: Deactivated successfully. Jun 25 18:47:52.209161 containerd[1445]: time="2024-06-25T18:47:52.209098361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:52.210092 containerd[1445]: time="2024-06-25T18:47:52.210041620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 18:47:52.211489 containerd[1445]: time="2024-06-25T18:47:52.211452455Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:52.214572 containerd[1445]: time="2024-06-25T18:47:52.214530637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:47:52.215657 containerd[1445]: time="2024-06-25T18:47:52.215621222Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.510271799s" Jun 25 18:47:52.215702 containerd[1445]: time="2024-06-25T18:47:52.215663321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:47:54.169079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:47:54.178870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:54.198097 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-7.scope)... Jun 25 18:47:54.198112 systemd[1]: Reloading... Jun 25 18:47:54.273670 zram_generator::config[2114]: No configuration found. Jun 25 18:47:54.439366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:47:54.516395 systemd[1]: Reloading finished in 317 ms. Jun 25 18:47:54.566025 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:47:54.566148 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:47:54.566456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:54.568384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:47:54.725235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:47:54.729623 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:47:54.773249 kubelet[2162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:47:54.773249 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:47:54.773249 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:47:54.773682 kubelet[2162]: I0625 18:47:54.773302 2162 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:47:55.138663 kubelet[2162]: I0625 18:47:55.138602 2162 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:47:55.138663 kubelet[2162]: I0625 18:47:55.138632 2162 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:47:55.140295 kubelet[2162]: I0625 18:47:55.139147 2162 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:47:55.157515 kubelet[2162]: E0625 18:47:55.157471 2162 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.158209 kubelet[2162]: I0625 18:47:55.158184 2162 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:47:55.168438 kubelet[2162]: I0625 18:47:55.168408 2162 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:47:55.169169 kubelet[2162]: I0625 18:47:55.169146 2162 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:47:55.169347 kubelet[2162]: I0625 18:47:55.169318 2162 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:47:55.169422 kubelet[2162]: I0625 18:47:55.169354 2162 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:47:55.169422 kubelet[2162]: I0625 18:47:55.169366 2162 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:47:55.169501 kubelet[2162]: I0625 18:47:55.169487 2162 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:47:55.169610 kubelet[2162]: I0625 18:47:55.169585 2162 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:47:55.169610 kubelet[2162]: I0625 18:47:55.169600 2162 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:47:55.169683 kubelet[2162]: I0625 18:47:55.169631 2162 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:47:55.169683 kubelet[2162]: I0625 18:47:55.169659 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:47:55.171164 kubelet[2162]: I0625 18:47:55.171124 2162 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:47:55.171663 kubelet[2162]: W0625 18:47:55.171599 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.171706 kubelet[2162]: E0625 18:47:55.171684 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.173415 kubelet[2162]: W0625 18:47:55.173341 2162 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.173415 kubelet[2162]: E0625 18:47:55.173384 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.174702 kubelet[2162]: I0625 18:47:55.174522 2162 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:47:55.175873 kubelet[2162]: W0625 18:47:55.175405 2162 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:47:55.177109 kubelet[2162]: I0625 18:47:55.176966 2162 server.go:1256] "Started kubelet" Jun 25 18:47:55.177109 kubelet[2162]: I0625 18:47:55.177040 2162 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:47:55.178198 kubelet[2162]: I0625 18:47:55.177896 2162 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:47:55.178198 kubelet[2162]: I0625 18:47:55.178087 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:47:55.179229 kubelet[2162]: I0625 18:47:55.178878 2162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:47:55.179229 kubelet[2162]: I0625 18:47:55.179082 2162 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:47:55.180283 kubelet[2162]: E0625 18:47:55.179544 2162 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:47:55.180283 kubelet[2162]: I0625 18:47:55.179581 2162 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:47:55.180283 kubelet[2162]: I0625 18:47:55.179706 2162 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:47:55.180283 kubelet[2162]: I0625 18:47:55.179760 2162 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:47:55.180283 kubelet[2162]: W0625 18:47:55.180051 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.180283 kubelet[2162]: E0625 18:47:55.180095 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.181742 kubelet[2162]: E0625 18:47:55.181404 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms" Jun 25 18:47:55.181957 kubelet[2162]: I0625 18:47:55.181896 2162 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:47:55.183337 kubelet[2162]: I0625 18:47:55.183315 2162 
factory.go:221] Registration of the containerd container factory successfully Jun 25 18:47:55.183337 kubelet[2162]: I0625 18:47:55.183331 2162 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:47:55.184244 kubelet[2162]: E0625 18:47:55.184213 2162 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc53c7c2968d2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:47:55.176930606 +0000 UTC m=+0.443357967,LastTimestamp:2024-06-25 18:47:55.176930606 +0000 UTC m=+0.443357967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 18:47:55.184679 kubelet[2162]: E0625 18:47:55.184659 2162 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:47:55.195140 kubelet[2162]: I0625 18:47:55.195109 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:47:55.196436 kubelet[2162]: I0625 18:47:55.196403 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:47:55.196436 kubelet[2162]: I0625 18:47:55.196433 2162 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:47:55.196541 kubelet[2162]: I0625 18:47:55.196467 2162 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:47:55.196541 kubelet[2162]: E0625 18:47:55.196518 2162 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:47:55.197120 kubelet[2162]: W0625 18:47:55.196983 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.197120 kubelet[2162]: E0625 18:47:55.197032 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:55.198600 kubelet[2162]: I0625 18:47:55.198182 2162 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:47:55.198600 kubelet[2162]: I0625 18:47:55.198200 2162 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:47:55.198600 kubelet[2162]: I0625 18:47:55.198217 2162 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:47:55.281683 kubelet[2162]: I0625 18:47:55.281627 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:47:55.282089 kubelet[2162]: E0625 18:47:55.282053 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jun 25 18:47:55.297237 kubelet[2162]: E0625 
18:47:55.297209 2162 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:47:55.382136 kubelet[2162]: E0625 18:47:55.382095 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms" Jun 25 18:47:55.465567 kubelet[2162]: I0625 18:47:55.465399 2162 policy_none.go:49] "None policy: Start" Jun 25 18:47:55.466682 kubelet[2162]: I0625 18:47:55.466661 2162 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:47:55.466861 kubelet[2162]: I0625 18:47:55.466827 2162 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:47:55.477794 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:47:55.483280 kubelet[2162]: I0625 18:47:55.483260 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:47:55.483613 kubelet[2162]: E0625 18:47:55.483590 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jun 25 18:47:55.491851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:47:55.494968 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:47:55.497706 kubelet[2162]: E0625 18:47:55.497682 2162 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:47:55.504784 kubelet[2162]: I0625 18:47:55.504756 2162 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:47:55.505105 kubelet[2162]: I0625 18:47:55.505081 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:47:55.505962 kubelet[2162]: E0625 18:47:55.505940 2162 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:47:55.783242 kubelet[2162]: E0625 18:47:55.783209 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms" Jun 25 18:47:55.885899 kubelet[2162]: I0625 18:47:55.885856 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:47:55.886338 kubelet[2162]: E0625 18:47:55.886308 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jun 25 18:47:55.898541 kubelet[2162]: I0625 18:47:55.898485 2162 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:47:55.899774 kubelet[2162]: I0625 18:47:55.899726 2162 topology_manager.go:215] "Topology Admit Handler" podUID="af4ef051c08c1b2f0254283ff389d666" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:47:55.900604 kubelet[2162]: I0625 18:47:55.900577 2162 topology_manager.go:215] "Topology Admit Handler" 
podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:47:55.905922 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jun 25 18:47:55.927473 systemd[1]: Created slice kubepods-burstable-podaf4ef051c08c1b2f0254283ff389d666.slice - libcontainer container kubepods-burstable-podaf4ef051c08c1b2f0254283ff389d666.slice. Jun 25 18:47:55.934385 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jun 25 18:47:55.984521 kubelet[2162]: I0625 18:47:55.984452 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:47:55.984521 kubelet[2162]: I0625 18:47:55.984514 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:47:55.984521 kubelet[2162]: I0625 18:47:55.984540 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:47:55.984738 kubelet[2162]: I0625 18:47:55.984613 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:47:55.984738 kubelet[2162]: I0625 18:47:55.984683 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:47:55.984792 kubelet[2162]: I0625 18:47:55.984730 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:47:55.984815 kubelet[2162]: I0625 18:47:55.984798 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 
18:47:55.984840 kubelet[2162]: I0625 18:47:55.984825 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:47:55.984865 kubelet[2162]: I0625 18:47:55.984844 2162 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:47:56.052191 kubelet[2162]: W0625 18:47:56.052026 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.052191 kubelet[2162]: E0625 18:47:56.052094 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.225676 kubelet[2162]: E0625 18:47:56.225449 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:56.226121 containerd[1445]: time="2024-06-25T18:47:56.226073339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jun 25 18:47:56.230494 kubelet[2162]: E0625 18:47:56.230449 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:56.231064 containerd[1445]: time="2024-06-25T18:47:56.231020404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af4ef051c08c1b2f0254283ff389d666,Namespace:kube-system,Attempt:0,}" Jun 25 18:47:56.236225 kubelet[2162]: E0625 18:47:56.236190 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:56.236630 containerd[1445]: time="2024-06-25T18:47:56.236590188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jun 25 18:47:56.353872 kubelet[2162]: W0625 18:47:56.353722 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.353872 kubelet[2162]: E0625 18:47:56.353784 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.584587 kubelet[2162]: E0625 
18:47:56.584546 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="1.6s" Jun 25 18:47:56.668502 kubelet[2162]: W0625 18:47:56.668370 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.668502 kubelet[2162]: E0625 18:47:56.668431 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.687667 kubelet[2162]: I0625 18:47:56.687631 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:47:56.687945 kubelet[2162]: E0625 18:47:56.687921 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jun 25 18:47:56.708376 kubelet[2162]: W0625 18:47:56.708339 2162 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:56.708376 kubelet[2162]: E0625 18:47:56.708377 2162 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:57.201164 kubelet[2162]: E0625 18:47:57.201124 2162 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.148:6443: connect: connection refused Jun 25 18:47:57.798228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196489130.mount: Deactivated successfully. 
Jun 25 18:47:57.844741 containerd[1445]: time="2024-06-25T18:47:57.844688351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:47:57.847284 containerd[1445]: time="2024-06-25T18:47:57.847234896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:47:57.848277 containerd[1445]: time="2024-06-25T18:47:57.848252234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:47:57.849246 containerd[1445]: time="2024-06-25T18:47:57.849219156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:47:57.850251 containerd[1445]: time="2024-06-25T18:47:57.850222237Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:47:57.851153 containerd[1445]: time="2024-06-25T18:47:57.851121413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:47:57.852224 containerd[1445]: time="2024-06-25T18:47:57.852186991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:47:57.854675 containerd[1445]: time="2024-06-25T18:47:57.854623850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:47:57.856264 containerd[1445]: time="2024-06-25T18:47:57.856226686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.630059731s" Jun 25 18:47:57.857022 containerd[1445]: time="2024-06-25T18:47:57.856990498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.620265137s" Jun 25 18:47:57.857712 containerd[1445]: time="2024-06-25T18:47:57.857682375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.626565721s" Jun 25 18:47:58.095914 containerd[1445]: time="2024-06-25T18:47:58.094334888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:47:58.095914 containerd[1445]: time="2024-06-25T18:47:58.094400381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.095914 containerd[1445]: time="2024-06-25T18:47:58.094441488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:47:58.095914 containerd[1445]: time="2024-06-25T18:47:58.094511840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.099157 containerd[1445]: time="2024-06-25T18:47:58.098852649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:47:58.099157 containerd[1445]: time="2024-06-25T18:47:58.098912331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.099157 containerd[1445]: time="2024-06-25T18:47:58.098936967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:47:58.099157 containerd[1445]: time="2024-06-25T18:47:58.098953068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.100181 containerd[1445]: time="2024-06-25T18:47:58.100117982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:47:58.101197 containerd[1445]: time="2024-06-25T18:47:58.101120231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.101197 containerd[1445]: time="2024-06-25T18:47:58.101149295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:47:58.101197 containerd[1445]: time="2024-06-25T18:47:58.101160707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:47:58.124778 systemd[1]: Started cri-containerd-0d935ad2d4e71b506d6b9a116be9ebf2db47ecd3f85fce14bd117c818180b84b.scope - libcontainer container 0d935ad2d4e71b506d6b9a116be9ebf2db47ecd3f85fce14bd117c818180b84b. Jun 25 18:47:58.129716 systemd[1]: Started cri-containerd-52c849b7ddca0c76607fdf7cae26c8d4563aee03b84ba6cf52d486a3cc0cccad.scope - libcontainer container 52c849b7ddca0c76607fdf7cae26c8d4563aee03b84ba6cf52d486a3cc0cccad. Jun 25 18:47:58.131227 systemd[1]: Started cri-containerd-5537324cf45e5030faf59f182e005c759af62457a9e007b8a2790f45b8d43d2a.scope - libcontainer container 5537324cf45e5030faf59f182e005c759af62457a9e007b8a2790f45b8d43d2a. 
Jun 25 18:47:58.185945 kubelet[2162]: E0625 18:47:58.185904 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="3.2s" Jun 25 18:47:58.210805 containerd[1445]: time="2024-06-25T18:47:58.210766167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d935ad2d4e71b506d6b9a116be9ebf2db47ecd3f85fce14bd117c818180b84b\"" Jun 25 18:47:58.213029 kubelet[2162]: E0625 18:47:58.212693 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:58.213842 containerd[1445]: time="2024-06-25T18:47:58.213781631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5537324cf45e5030faf59f182e005c759af62457a9e007b8a2790f45b8d43d2a\"" Jun 25 18:47:58.215373 kubelet[2162]: E0625 18:47:58.215228 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:58.216930 containerd[1445]: time="2024-06-25T18:47:58.216893265Z" level=info msg="CreateContainer within sandbox \"0d935ad2d4e71b506d6b9a116be9ebf2db47ecd3f85fce14bd117c818180b84b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:47:58.217251 containerd[1445]: time="2024-06-25T18:47:58.217023710Z" level=info msg="CreateContainer within sandbox \"5537324cf45e5030faf59f182e005c759af62457a9e007b8a2790f45b8d43d2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:47:58.221712 containerd[1445]: time="2024-06-25T18:47:58.221683327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af4ef051c08c1b2f0254283ff389d666,Namespace:kube-system,Attempt:0,} returns sandbox id \"52c849b7ddca0c76607fdf7cae26c8d4563aee03b84ba6cf52d486a3cc0cccad\"" Jun 25 18:47:58.222236 kubelet[2162]: E0625 18:47:58.222115 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:58.226604 containerd[1445]: time="2024-06-25T18:47:58.226565170Z" level=info msg="CreateContainer within sandbox \"52c849b7ddca0c76607fdf7cae26c8d4563aee03b84ba6cf52d486a3cc0cccad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:47:58.237353 containerd[1445]: time="2024-06-25T18:47:58.237315888Z" level=info msg="CreateContainer within sandbox \"0d935ad2d4e71b506d6b9a116be9ebf2db47ecd3f85fce14bd117c818180b84b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c20ef7c0400ac3ec33098e0315959369fa7b5e7929b2bf86c452d016fe597180\"" Jun 25 18:47:58.237870 containerd[1445]: time="2024-06-25T18:47:58.237840221Z" level=info msg="StartContainer for \"c20ef7c0400ac3ec33098e0315959369fa7b5e7929b2bf86c452d016fe597180\"" Jun 25 18:47:58.251496 containerd[1445]: time="2024-06-25T18:47:58.251434089Z" level=info msg="CreateContainer within sandbox \"52c849b7ddca0c76607fdf7cae26c8d4563aee03b84ba6cf52d486a3cc0cccad\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4466d6408b3cda8b765fc55b16b9cb55402215753ffff7006d1ec1887d5d544b\"" Jun 25 18:47:58.252231 containerd[1445]: time="2024-06-25T18:47:58.252196569Z" level=info msg="StartContainer for \"4466d6408b3cda8b765fc55b16b9cb55402215753ffff7006d1ec1887d5d544b\"" Jun 25 18:47:58.255629 containerd[1445]: time="2024-06-25T18:47:58.255608456Z" level=info msg="CreateContainer within sandbox \"5537324cf45e5030faf59f182e005c759af62457a9e007b8a2790f45b8d43d2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"97a37600868f91950003f05a02402ae4c3935eaadb03d8a592c8ba33b7ab9835\"" Jun 25 18:47:58.256170 containerd[1445]: time="2024-06-25T18:47:58.256150793Z" level=info msg="StartContainer for \"97a37600868f91950003f05a02402ae4c3935eaadb03d8a592c8ba33b7ab9835\"" Jun 25 18:47:58.267838 systemd[1]: Started cri-containerd-c20ef7c0400ac3ec33098e0315959369fa7b5e7929b2bf86c452d016fe597180.scope - libcontainer container c20ef7c0400ac3ec33098e0315959369fa7b5e7929b2bf86c452d016fe597180. Jun 25 18:47:58.287779 systemd[1]: Started cri-containerd-4466d6408b3cda8b765fc55b16b9cb55402215753ffff7006d1ec1887d5d544b.scope - libcontainer container 4466d6408b3cda8b765fc55b16b9cb55402215753ffff7006d1ec1887d5d544b. Jun 25 18:47:58.290602 kubelet[2162]: I0625 18:47:58.290174 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:47:58.290602 kubelet[2162]: E0625 18:47:58.290474 2162 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Jun 25 18:47:58.292741 systemd[1]: Started cri-containerd-97a37600868f91950003f05a02402ae4c3935eaadb03d8a592c8ba33b7ab9835.scope - libcontainer container 97a37600868f91950003f05a02402ae4c3935eaadb03d8a592c8ba33b7ab9835. 
Jun 25 18:47:58.344868 containerd[1445]: time="2024-06-25T18:47:58.344722345Z" level=info msg="StartContainer for \"4466d6408b3cda8b765fc55b16b9cb55402215753ffff7006d1ec1887d5d544b\" returns successfully" Jun 25 18:47:58.344868 containerd[1445]: time="2024-06-25T18:47:58.344779883Z" level=info msg="StartContainer for \"97a37600868f91950003f05a02402ae4c3935eaadb03d8a592c8ba33b7ab9835\" returns successfully" Jun 25 18:47:58.367844 containerd[1445]: time="2024-06-25T18:47:58.366890801Z" level=info msg="StartContainer for \"c20ef7c0400ac3ec33098e0315959369fa7b5e7929b2bf86c452d016fe597180\" returns successfully" Jun 25 18:47:59.210727 kubelet[2162]: E0625 18:47:59.210687 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:59.211262 kubelet[2162]: E0625 18:47:59.211241 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:47:59.212582 kubelet[2162]: E0625 18:47:59.212559 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:00.173770 kubelet[2162]: I0625 18:48:00.173718 2162 apiserver.go:52] "Watching apiserver" Jun 25 18:48:00.180241 kubelet[2162]: I0625 18:48:00.180202 2162 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:48:00.216526 kubelet[2162]: E0625 18:48:00.214555 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:00.216526 kubelet[2162]: E0625 18:48:00.214610 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:00.217340 kubelet[2162]: E0625 18:48:00.217309 2162 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:48:00.585367 kubelet[2162]: E0625 18:48:00.585336 2162 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:48:01.078851 kubelet[2162]: E0625 18:48:01.078819 2162 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:48:01.390769 kubelet[2162]: E0625 18:48:01.390612 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:48:01.492177 kubelet[2162]: I0625 18:48:01.492146 2162 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:48:01.499617 kubelet[2162]: I0625 18:48:01.499575 2162 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:48:02.821145 systemd[1]: Reloading requested from client PID 2437 ('systemctl') (unit session-7.scope)... Jun 25 18:48:02.821168 systemd[1]: Reloading... Jun 25 18:48:02.897709 zram_generator::config[2478]: No configuration found. 
Jun 25 18:48:03.018404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:48:03.125064 systemd[1]: Reloading finished in 303 ms. Jun 25 18:48:03.177927 kubelet[2162]: I0625 18:48:03.177818 2162 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:48:03.177949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:03.202596 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:48:03.203009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:48:03.210023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:48:03.355410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:48:03.360880 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:48:03.611049 sudo[2532]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:48:03.611394 sudo[2532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:48:03.616887 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:48:03.616887 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:48:03.616887 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:48:03.617282 kubelet[2519]: I0625 18:48:03.616938 2519 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:48:03.621734 kubelet[2519]: I0625 18:48:03.621703 2519 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:48:03.621734 kubelet[2519]: I0625 18:48:03.621721 2519 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:48:03.621906 kubelet[2519]: I0625 18:48:03.621877 2519 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:48:03.623189 kubelet[2519]: I0625 18:48:03.623162 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:48:03.624936 kubelet[2519]: I0625 18:48:03.624896 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:48:03.633762 kubelet[2519]: I0625 18:48:03.633726 2519 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:48:03.634047 kubelet[2519]: I0625 18:48:03.634024 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:48:03.634302 kubelet[2519]: I0625 18:48:03.634277 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:48:03.634382 kubelet[2519]: I0625 18:48:03.634311 2519 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:48:03.634382 kubelet[2519]: I0625 18:48:03.634324 2519 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:48:03.634382 kubelet[2519]: I0625 18:48:03.634360 2519 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:48:03.634493 kubelet[2519]: I0625 18:48:03.634470 2519 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:48:03.634520 kubelet[2519]: I0625 18:48:03.634493 2519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:48:03.634554 kubelet[2519]: I0625 18:48:03.634533 2519 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:48:03.634584 kubelet[2519]: I0625 18:48:03.634553 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:48:03.640092 kubelet[2519]: I0625 18:48:03.638998 2519 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:48:03.640092 kubelet[2519]: I0625 18:48:03.639215 2519 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:48:03.640092 kubelet[2519]: I0625 18:48:03.639686 2519 server.go:1256] "Started kubelet" Jun 25 18:48:03.640092 kubelet[2519]: I0625 18:48:03.639990 2519 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:48:03.641456 kubelet[2519]: I0625 18:48:03.641440 2519 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:48:03.644393 kubelet[2519]: I0625 18:48:03.642576 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:48:03.644629 kubelet[2519]: I0625 18:48:03.644616 2519 server.go:233] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:48:03.644711 kubelet[2519]: I0625 18:48:03.642917 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:48:03.645095 kubelet[2519]: I0625 18:48:03.645062 2519 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:48:03.646134 kubelet[2519]: I0625 18:48:03.645294 2519 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:48:03.646134 kubelet[2519]: I0625 18:48:03.645436 2519 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:48:03.649678 kubelet[2519]: I0625 18:48:03.649628 2519 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:48:03.650764 kubelet[2519]: I0625 18:48:03.650731 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:48:03.652774 kubelet[2519]: I0625 18:48:03.652740 2519 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:48:03.658916 kubelet[2519]: I0625 18:48:03.658277 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:48:03.660455 kubelet[2519]: I0625 18:48:03.660417 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:48:03.660455 kubelet[2519]: I0625 18:48:03.660442 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:48:03.660455 kubelet[2519]: I0625 18:48:03.660457 2519 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:48:03.660582 kubelet[2519]: E0625 18:48:03.660505 2519 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.690789 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.690812 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.690827 2519 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.691009 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.691029 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:48:03.691449 kubelet[2519]: I0625 18:48:03.691036 2519 policy_none.go:49] "None policy: Start" Jun 25 18:48:03.691794 kubelet[2519]: I0625 18:48:03.691620 2519 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:48:03.691794 kubelet[2519]: I0625 18:48:03.691672 2519 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:48:03.691890 kubelet[2519]: I0625 18:48:03.691873 2519 state_mem.go:75] "Updated machine memory state" Jun 25 18:48:03.696920 kubelet[2519]: I0625 18:48:03.696796 2519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:48:03.697167 kubelet[2519]: I0625 18:48:03.697017 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:48:03.749340 kubelet[2519]: I0625 18:48:03.749309 2519 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:48:03.755876 kubelet[2519]: I0625 18:48:03.755840 2519 kubelet_node_status.go:112] "Node was previously registered" 
node="localhost" Jun 25 18:48:03.755995 kubelet[2519]: I0625 18:48:03.755941 2519 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:48:03.760883 kubelet[2519]: I0625 18:48:03.760861 2519 topology_manager.go:215] "Topology Admit Handler" podUID="af4ef051c08c1b2f0254283ff389d666" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:48:03.761071 kubelet[2519]: I0625 18:48:03.761032 2519 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:48:03.761113 kubelet[2519]: I0625 18:48:03.761082 2519 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:48:03.946818 kubelet[2519]: I0625 18:48:03.946691 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:48:03.946818 kubelet[2519]: I0625 18:48:03.946752 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:48:03.946818 kubelet[2519]: I0625 18:48:03.946780 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:48:03.946818 kubelet[2519]: I0625 18:48:03.946805 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:48:03.946818 kubelet[2519]: I0625 18:48:03.946829 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:48:03.947070 kubelet[2519]: I0625 18:48:03.946855 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4ef051c08c1b2f0254283ff389d666-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af4ef051c08c1b2f0254283ff389d666\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:48:03.947070 kubelet[2519]: I0625 18:48:03.946878 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:48:03.947070 kubelet[2519]: I0625 18:48:03.946904 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:48:03.947070 kubelet[2519]: I0625 18:48:03.946936 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:48:04.072275 kubelet[2519]: E0625 18:48:04.071750 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.072275 kubelet[2519]: E0625 18:48:04.072155 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.072275 kubelet[2519]: E0625 18:48:04.072210 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.092335 sudo[2532]: pam_unix(sudo:session): session closed for user root Jun 25 18:48:04.635761 kubelet[2519]: I0625 18:48:04.635719 2519 apiserver.go:52] "Watching apiserver" Jun 25 18:48:04.646019 kubelet[2519]: I0625 18:48:04.645989 2519 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:48:04.674047 kubelet[2519]: E0625 18:48:04.674008 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.674224 kubelet[2519]: E0625 18:48:04.674204 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.680475 kubelet[2519]: E0625 18:48:04.680432 2519 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:48:04.681014 kubelet[2519]: E0625 18:48:04.680980 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:04.694309 kubelet[2519]: I0625 18:48:04.694249 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.694197193 podStartE2EDuration="1.694197193s" podCreationTimestamp="2024-06-25 18:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:04.693876236 +0000 UTC m=+1.328833543" watchObservedRunningTime="2024-06-25 18:48:04.694197193 +0000 UTC m=+1.329154500" Jun 25 18:48:04.701930 kubelet[2519]: I0625 18:48:04.701854 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7014172300000001 podStartE2EDuration="1.70141723s" podCreationTimestamp="2024-06-25 18:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:04.701362054 +0000 UTC m=+1.336319371" watchObservedRunningTime="2024-06-25 18:48:04.70141723 +0000 UTC m=+1.336374547" Jun 25 18:48:05.451895 sudo[1629]: pam_unix(sudo:session): session closed for user root Jun 25 18:48:05.454193 sshd[1626]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:05.459088 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:34088.service: Deactivated successfully. Jun 25 18:48:05.461144 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:48:05.461359 systemd[1]: session-7.scope: Consumed 4.124s CPU time, 139.5M memory peak, 0B memory swap peak. Jun 25 18:48:05.461914 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:48:05.462831 systemd-logind[1429]: Removed session 7. Jun 25 18:48:05.676402 kubelet[2519]: E0625 18:48:05.676366 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:10.710909 kubelet[2519]: E0625 18:48:10.710870 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:10.722509 kubelet[2519]: I0625 18:48:10.722452 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.722417179 podStartE2EDuration="7.722417179s" podCreationTimestamp="2024-06-25 18:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:04.708418488 +0000 UTC m=+1.343375795" watchObservedRunningTime="2024-06-25 18:48:10.722417179 +0000 UTC m=+7.357374486" Jun 25 18:48:11.027133 kubelet[2519]: E0625 18:48:11.027089 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:11.685382 kubelet[2519]: E0625 18:48:11.685312 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:11.685653 kubelet[2519]: E0625 18:48:11.685437 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:13.214216 kubelet[2519]: E0625 18:48:13.214157 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:13.690315 kubelet[2519]: E0625 18:48:13.689695 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:16.518809 update_engine[1431]: I0625 18:48:16.518748 1431 update_attempter.cc:509] Updating boot flags... 
Jun 25 18:48:16.546677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2605) Jun 25 18:48:16.580040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2608) Jun 25 18:48:16.622663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2608) Jun 25 18:48:16.751489 kubelet[2519]: I0625 18:48:16.751442 2519 topology_manager.go:215] "Topology Admit Handler" podUID="51bf4984-3dc5-4055-85fc-f034a725d28d" podNamespace="kube-system" podName="cilium-operator-5cc964979-gj994" Jun 25 18:48:16.755260 kubelet[2519]: I0625 18:48:16.754262 2519 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:48:16.755260 kubelet[2519]: I0625 18:48:16.754873 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:48:16.755445 containerd[1445]: time="2024-06-25T18:48:16.754707340Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:48:16.761611 systemd[1]: Created slice kubepods-besteffort-pod51bf4984_3dc5_4055_85fc_f034a725d28d.slice - libcontainer container kubepods-besteffort-pod51bf4984_3dc5_4055_85fc_f034a725d28d.slice. Jun 25 18:48:16.788226 kubelet[2519]: I0625 18:48:16.788185 2519 topology_manager.go:215] "Topology Admit Handler" podUID="7c74a4f6-1909-4b77-8a33-38b113ca6a54" podNamespace="kube-system" podName="kube-proxy-fr5j9" Jun 25 18:48:16.799075 kubelet[2519]: I0625 18:48:16.798271 2519 topology_manager.go:215] "Topology Admit Handler" podUID="78813f90-da93-423e-809d-14ef08c774f8" podNamespace="kube-system" podName="cilium-l7d8g" Jun 25 18:48:16.798894 systemd[1]: Created slice kubepods-besteffort-pod7c74a4f6_1909_4b77_8a33_38b113ca6a54.slice - libcontainer container kubepods-besteffort-pod7c74a4f6_1909_4b77_8a33_38b113ca6a54.slice. Jun 25 18:48:16.810946 systemd[1]: Created slice kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice - libcontainer container kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice. 
Jun 25 18:48:16.824800 kubelet[2519]: I0625 18:48:16.824754 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-xtables-lock\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.824800 kubelet[2519]: I0625 18:48:16.824792 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-hubble-tls\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.824969 kubelet[2519]: I0625 18:48:16.824816 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncfbm\" (UniqueName: \"kubernetes.io/projected/51bf4984-3dc5-4055-85fc-f034a725d28d-kube-api-access-ncfbm\") pod \"cilium-operator-5cc964979-gj994\" (UID: \"51bf4984-3dc5-4055-85fc-f034a725d28d\") " pod="kube-system/cilium-operator-5cc964979-gj994" Jun 25 18:48:16.824969 kubelet[2519]: I0625 18:48:16.824837 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-run\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.824969 kubelet[2519]: I0625 18:48:16.824859 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-lib-modules\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825039 kubelet[2519]: I0625 18:48:16.824963 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-net\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825039 kubelet[2519]: I0625 18:48:16.825022 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51bf4984-3dc5-4055-85fc-f034a725d28d-cilium-config-path\") pod \"cilium-operator-5cc964979-gj994\" (UID: \"51bf4984-3dc5-4055-85fc-f034a725d28d\") " pod="kube-system/cilium-operator-5cc964979-gj994" Jun 25 18:48:16.825091 kubelet[2519]: I0625 18:48:16.825069 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-hostproc\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825143 kubelet[2519]: I0625 18:48:16.825111 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-kernel\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825194 kubelet[2519]: I0625 18:48:16.825168 2519 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-etc-cni-netd\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825221 kubelet[2519]: I0625 18:48:16.825200 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qrlc\" (UniqueName: \"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-kube-api-access-7qrlc\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825246 kubelet[2519]: I0625 18:48:16.825223 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c74a4f6-1909-4b77-8a33-38b113ca6a54-lib-modules\") pod \"kube-proxy-fr5j9\" (UID: \"7c74a4f6-1909-4b77-8a33-38b113ca6a54\") " pod="kube-system/kube-proxy-fr5j9" Jun 25 18:48:16.825277 kubelet[2519]: I0625 18:48:16.825266 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-bpf-maps\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825315 kubelet[2519]: I0625 18:48:16.825302 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78813f90-da93-423e-809d-14ef08c774f8-cilium-config-path\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825353 kubelet[2519]: I0625 18:48:16.825343 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c74a4f6-1909-4b77-8a33-38b113ca6a54-kube-proxy\") pod \"kube-proxy-fr5j9\" (UID: \"7c74a4f6-1909-4b77-8a33-38b113ca6a54\") " pod="kube-system/kube-proxy-fr5j9" Jun 25 18:48:16.825375 kubelet[2519]: I0625 18:48:16.825370 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-cgroup\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825401 kubelet[2519]: I0625 18:48:16.825394 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cni-path\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:16.825442 kubelet[2519]: I0625 18:48:16.825421 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c74a4f6-1909-4b77-8a33-38b113ca6a54-xtables-lock\") pod \"kube-proxy-fr5j9\" (UID: \"7c74a4f6-1909-4b77-8a33-38b113ca6a54\") " pod="kube-system/kube-proxy-fr5j9" Jun 25 18:48:16.825469 kubelet[2519]: I0625 18:48:16.825452 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjgr\" (UniqueName: 
\"kubernetes.io/projected/7c74a4f6-1909-4b77-8a33-38b113ca6a54-kube-api-access-wtjgr\") pod \"kube-proxy-fr5j9\" (UID: \"7c74a4f6-1909-4b77-8a33-38b113ca6a54\") " pod="kube-system/kube-proxy-fr5j9" Jun 25 18:48:16.825495 kubelet[2519]: I0625 18:48:16.825477 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78813f90-da93-423e-809d-14ef08c774f8-clustermesh-secrets\") pod \"cilium-l7d8g\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " pod="kube-system/cilium-l7d8g" Jun 25 18:48:17.071135 kubelet[2519]: E0625 18:48:17.070997 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.071663 containerd[1445]: time="2024-06-25T18:48:17.071604588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gj994,Uid:51bf4984-3dc5-4055-85fc-f034a725d28d,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:17.103191 kubelet[2519]: E0625 18:48:17.102445 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.103348 containerd[1445]: time="2024-06-25T18:48:17.103240932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr5j9,Uid:7c74a4f6-1909-4b77-8a33-38b113ca6a54,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:17.108294 containerd[1445]: time="2024-06-25T18:48:17.108197831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:17.108294 containerd[1445]: time="2024-06-25T18:48:17.108258836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.108494 containerd[1445]: time="2024-06-25T18:48:17.108276049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:17.108494 containerd[1445]: time="2024-06-25T18:48:17.108472351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.115471 kubelet[2519]: E0625 18:48:17.115424 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.116806 containerd[1445]: time="2024-06-25T18:48:17.116766451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l7d8g,Uid:78813f90-da93-423e-809d-14ef08c774f8,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:17.133258 containerd[1445]: time="2024-06-25T18:48:17.133102705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:17.133441 containerd[1445]: time="2024-06-25T18:48:17.133226088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.133441 containerd[1445]: time="2024-06-25T18:48:17.133248551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:17.133441 containerd[1445]: time="2024-06-25T18:48:17.133258410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.135918 systemd[1]: Started cri-containerd-f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8.scope - libcontainer container f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8. Jun 25 18:48:17.154443 containerd[1445]: time="2024-06-25T18:48:17.154290328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:17.154443 containerd[1445]: time="2024-06-25T18:48:17.154395066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.154443 containerd[1445]: time="2024-06-25T18:48:17.154418981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:17.154443 containerd[1445]: time="2024-06-25T18:48:17.154436274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:17.154897 systemd[1]: Started cri-containerd-189d2c920590905ab7cfa5f82428ac8f04cb656d3d96e2fc8a6b73314a89bb65.scope - libcontainer container 189d2c920590905ab7cfa5f82428ac8f04cb656d3d96e2fc8a6b73314a89bb65. Jun 25 18:48:17.179863 systemd[1]: Started cri-containerd-ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec.scope - libcontainer container ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec. Jun 25 18:48:17.191603 containerd[1445]: time="2024-06-25T18:48:17.191451166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fr5j9,Uid:7c74a4f6-1909-4b77-8a33-38b113ca6a54,Namespace:kube-system,Attempt:0,} returns sandbox id \"189d2c920590905ab7cfa5f82428ac8f04cb656d3d96e2fc8a6b73314a89bb65\"" Jun 25 18:48:17.192775 kubelet[2519]: E0625 18:48:17.192740 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.198379 containerd[1445]: time="2024-06-25T18:48:17.198306101Z" level=info msg="CreateContainer within sandbox \"189d2c920590905ab7cfa5f82428ac8f04cb656d3d96e2fc8a6b73314a89bb65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:48:17.201269 containerd[1445]: time="2024-06-25T18:48:17.201237773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gj994,Uid:51bf4984-3dc5-4055-85fc-f034a725d28d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\"" Jun 25 18:48:17.204176 kubelet[2519]: E0625 18:48:17.203804 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.205925 containerd[1445]: time="2024-06-25T18:48:17.205858155Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:48:17.212520 containerd[1445]: time="2024-06-25T18:48:17.212466041Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-l7d8g,Uid:78813f90-da93-423e-809d-14ef08c774f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\"" Jun 25 18:48:17.213247 kubelet[2519]: E0625 18:48:17.213217 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.223918 containerd[1445]: time="2024-06-25T18:48:17.223863901Z" level=info msg="CreateContainer within sandbox \"189d2c920590905ab7cfa5f82428ac8f04cb656d3d96e2fc8a6b73314a89bb65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a6c8b99da7f56383824a32845a5d12e339077b83c19b8418e064468f631257c\"" Jun 25 18:48:17.224402 containerd[1445]: time="2024-06-25T18:48:17.224361474Z" level=info msg="StartContainer for \"0a6c8b99da7f56383824a32845a5d12e339077b83c19b8418e064468f631257c\"" Jun 25 18:48:17.253820 systemd[1]: Started cri-containerd-0a6c8b99da7f56383824a32845a5d12e339077b83c19b8418e064468f631257c.scope - libcontainer container 0a6c8b99da7f56383824a32845a5d12e339077b83c19b8418e064468f631257c. Jun 25 18:48:17.286951 containerd[1445]: time="2024-06-25T18:48:17.286886175Z" level=info msg="StartContainer for \"0a6c8b99da7f56383824a32845a5d12e339077b83c19b8418e064468f631257c\" returns successfully" Jun 25 18:48:17.698801 kubelet[2519]: E0625 18:48:17.698755 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:17.706759 kubelet[2519]: I0625 18:48:17.706721 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fr5j9" podStartSLOduration=1.706683999 podStartE2EDuration="1.706683999s" podCreationTimestamp="2024-06-25 18:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:17.706572317 +0000 UTC m=+14.341529624" watchObservedRunningTime="2024-06-25 18:48:17.706683999 +0000 UTC m=+14.341641306" Jun 25 18:48:18.759131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199374845.mount: Deactivated successfully. 
Jun 25 18:48:19.070119 containerd[1445]: time="2024-06-25T18:48:19.070040716Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:19.070924 containerd[1445]: time="2024-06-25T18:48:19.070844265Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Jun 25 18:48:19.072062 containerd[1445]: time="2024-06-25T18:48:19.072011253Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:19.073452 containerd[1445]: time="2024-06-25T18:48:19.073402645Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.867491781s" Jun 25 18:48:19.073452 containerd[1445]: time="2024-06-25T18:48:19.073449314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 25 18:48:19.074273 containerd[1445]: time="2024-06-25T18:48:19.074237404Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:48:19.075557 containerd[1445]: time="2024-06-25T18:48:19.075527014Z" level=info msg="CreateContainer within sandbox \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:48:19.092938 containerd[1445]: time="2024-06-25T18:48:19.092875034Z" level=info msg="CreateContainer within sandbox \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\"" Jun 25 18:48:19.093633 containerd[1445]: time="2024-06-25T18:48:19.093600056Z" level=info msg="StartContainer for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\"" Jun 25 18:48:19.093876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615581144.mount: Deactivated successfully. Jun 25 18:48:19.128929 systemd[1]: Started cri-containerd-ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc.scope - libcontainer container ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc. 
Jun 25 18:48:19.160253 containerd[1445]: time="2024-06-25T18:48:19.160194631Z" level=info msg="StartContainer for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" returns successfully" Jun 25 18:48:19.705243 kubelet[2519]: E0625 18:48:19.704828 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:20.732346 kubelet[2519]: E0625 18:48:20.732295 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:26.441921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919973830.mount: Deactivated successfully. Jun 25 18:48:30.323706 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:42178.service - OpenSSH per-connection server daemon (10.0.0.1:42178). Jun 25 18:48:30.426194 sshd[2982]: Accepted publickey for core from 10.0.0.1 port 42178 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:30.427811 sshd[2982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:30.432029 systemd-logind[1429]: New session 8 of user core. Jun 25 18:48:30.440764 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:48:30.528414 containerd[1445]: time="2024-06-25T18:48:30.528320655Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:30.531572 containerd[1445]: time="2024-06-25T18:48:30.531378063Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735347" Jun 25 18:48:30.533966 containerd[1445]: time="2024-06-25T18:48:30.533568618Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:48:30.535399 containerd[1445]: time="2024-06-25T18:48:30.535368197Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.461083944s" Jun 25 18:48:30.535504 containerd[1445]: time="2024-06-25T18:48:30.535403914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 25 18:48:30.561457 containerd[1445]: time="2024-06-25T18:48:30.560797147Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:48:30.646875 containerd[1445]: time="2024-06-25T18:48:30.646583728Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\"" Jun 25 18:48:30.647544 
containerd[1445]: time="2024-06-25T18:48:30.647128473Z" level=info msg="StartContainer for \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\"" Jun 25 18:48:30.680884 systemd[1]: Started cri-containerd-cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b.scope - libcontainer container cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b. Jun 25 18:48:30.717830 containerd[1445]: time="2024-06-25T18:48:30.717779689Z" level=info msg="StartContainer for \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\" returns successfully" Jun 25 18:48:30.729427 systemd[1]: cri-containerd-cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b.scope: Deactivated successfully. Jun 25 18:48:30.757624 sshd[2982]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:30.758109 kubelet[2519]: E0625 18:48:30.758092 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:30.761775 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:48:30.763136 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:42178.service: Deactivated successfully. Jun 25 18:48:30.766304 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:48:30.771829 systemd-logind[1429]: Removed session 8. Jun 25 18:48:30.801151 kubelet[2519]: I0625 18:48:30.795573 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gj994" podStartSLOduration=12.924919355 podStartE2EDuration="14.793826987s" podCreationTimestamp="2024-06-25 18:48:16 +0000 UTC" firstStartedPulling="2024-06-25 18:48:17.20500021 +0000 UTC m=+13.839957517" lastFinishedPulling="2024-06-25 18:48:19.073907841 +0000 UTC m=+15.708865149" observedRunningTime="2024-06-25 18:48:19.72513555 +0000 UTC m=+16.360092857" watchObservedRunningTime="2024-06-25 18:48:30.793826987 +0000 UTC m=+27.428784295" Jun 25 18:48:31.622353 containerd[1445]: time="2024-06-25T18:48:31.622225472Z" level=info msg="shim disconnected" id=cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b namespace=k8s.io Jun 25 18:48:31.622353 containerd[1445]: time="2024-06-25T18:48:31.622288832Z" level=warning msg="cleaning up after shim disconnected" id=cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b namespace=k8s.io Jun 25 18:48:31.622353 containerd[1445]: time="2024-06-25T18:48:31.622297679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:31.635807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b-rootfs.mount: Deactivated successfully. Jun 25 18:48:31.760221 kubelet[2519]: E0625 18:48:31.760176 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:31.762244 containerd[1445]: time="2024-06-25T18:48:31.762199009Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:48:31.778140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435033503.mount: Deactivated successfully. 
Jun 25 18:48:31.779792 containerd[1445]: time="2024-06-25T18:48:31.779733332Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\"" Jun 25 18:48:31.780283 containerd[1445]: time="2024-06-25T18:48:31.780235167Z" level=info msg="StartContainer for \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\"" Jun 25 18:48:31.808942 systemd[1]: Started cri-containerd-b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f.scope - libcontainer container b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f. Jun 25 18:48:31.855933 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:48:31.856837 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:48:31.856936 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:48:31.864026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:48:31.864319 systemd[1]: cri-containerd-b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f.scope: Deactivated successfully. Jun 25 18:48:31.880183 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:48:31.893713 containerd[1445]: time="2024-06-25T18:48:31.893624843Z" level=info msg="StartContainer for \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\" returns successfully" Jun 25 18:48:31.949951 containerd[1445]: time="2024-06-25T18:48:31.949895016Z" level=info msg="shim disconnected" id=b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f namespace=k8s.io Jun 25 18:48:31.949951 containerd[1445]: time="2024-06-25T18:48:31.949944630Z" level=warning msg="cleaning up after shim disconnected" id=b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f namespace=k8s.io Jun 25 18:48:31.949951 containerd[1445]: time="2024-06-25T18:48:31.949952805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:32.635297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f-rootfs.mount: Deactivated successfully. Jun 25 18:48:32.763580 kubelet[2519]: E0625 18:48:32.763546 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:32.766480 containerd[1445]: time="2024-06-25T18:48:32.766310882Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:48:32.802554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861448364.mount: Deactivated successfully. 
Jun 25 18:48:32.804284 containerd[1445]: time="2024-06-25T18:48:32.804234454Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\"" Jun 25 18:48:32.804807 containerd[1445]: time="2024-06-25T18:48:32.804756707Z" level=info msg="StartContainer for \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\"" Jun 25 18:48:32.833896 systemd[1]: Started cri-containerd-f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111.scope - libcontainer container f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111. Jun 25 18:48:32.864227 containerd[1445]: time="2024-06-25T18:48:32.864123372Z" level=info msg="StartContainer for \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\" returns successfully" Jun 25 18:48:32.864813 systemd[1]: cri-containerd-f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111.scope: Deactivated successfully. Jun 25 18:48:32.892934 containerd[1445]: time="2024-06-25T18:48:32.892309827Z" level=info msg="shim disconnected" id=f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111 namespace=k8s.io Jun 25 18:48:32.892934 containerd[1445]: time="2024-06-25T18:48:32.892392803Z" level=warning msg="cleaning up after shim disconnected" id=f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111 namespace=k8s.io Jun 25 18:48:32.892934 containerd[1445]: time="2024-06-25T18:48:32.892405657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:33.635381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111-rootfs.mount: Deactivated successfully. Jun 25 18:48:33.765957 kubelet[2519]: E0625 18:48:33.765918 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:33.767566 containerd[1445]: time="2024-06-25T18:48:33.767516750Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:48:33.783600 containerd[1445]: time="2024-06-25T18:48:33.783552036Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\"" Jun 25 18:48:33.784061 containerd[1445]: time="2024-06-25T18:48:33.784035455Z" level=info msg="StartContainer for \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\"" Jun 25 18:48:33.813784 systemd[1]: Started cri-containerd-ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8.scope - libcontainer container ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8. Jun 25 18:48:33.837114 systemd[1]: cri-containerd-ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8.scope: Deactivated successfully. 
Jun 25 18:48:33.838909 containerd[1445]: time="2024-06-25T18:48:33.838755007Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice/cri-containerd-ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8.scope/memory.events\": no such file or directory" Jun 25 18:48:33.913916 containerd[1445]: time="2024-06-25T18:48:33.913805299Z" level=info msg="StartContainer for \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\" returns successfully" Jun 25 18:48:33.938435 containerd[1445]: time="2024-06-25T18:48:33.938366008Z" level=info msg="shim disconnected" id=ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8 namespace=k8s.io Jun 25 18:48:33.938435 containerd[1445]: time="2024-06-25T18:48:33.938428296Z" level=warning msg="cleaning up after shim disconnected" id=ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8 namespace=k8s.io Jun 25 18:48:33.938435 containerd[1445]: time="2024-06-25T18:48:33.938438174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:48:34.635282 systemd[1]: run-containerd-runc-k8s.io-ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8-runc.rVQ8mp.mount: Deactivated successfully. Jun 25 18:48:34.635393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8-rootfs.mount: Deactivated successfully. Jun 25 18:48:34.769153 kubelet[2519]: E0625 18:48:34.768844 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:34.770708 containerd[1445]: time="2024-06-25T18:48:34.770616447Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:48:34.907364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965839259.mount: Deactivated successfully. Jun 25 18:48:34.912930 containerd[1445]: time="2024-06-25T18:48:34.912867948Z" level=info msg="CreateContainer within sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\"" Jun 25 18:48:34.914205 containerd[1445]: time="2024-06-25T18:48:34.914167924Z" level=info msg="StartContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\"" Jun 25 18:48:34.951872 systemd[1]: Started cri-containerd-477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85.scope - libcontainer container 477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85. 
Jun 25 18:48:35.020369 containerd[1445]: time="2024-06-25T18:48:35.020311785Z" level=info msg="StartContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" returns successfully" Jun 25 18:48:35.183185 kubelet[2519]: I0625 18:48:35.183058 2519 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:48:35.228470 kubelet[2519]: I0625 18:48:35.227843 2519 topology_manager.go:215] "Topology Admit Handler" podUID="6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5" podNamespace="kube-system" podName="coredns-76f75df574-2gz2z" Jun 25 18:48:35.231744 kubelet[2519]: I0625 18:48:35.231164 2519 topology_manager.go:215] "Topology Admit Handler" podUID="b4115d96-5a89-409b-91ce-f310bc45304d" podNamespace="kube-system" podName="coredns-76f75df574-wqw79" Jun 25 18:48:35.239315 systemd[1]: Created slice kubepods-burstable-pod6a5a9b8d_9b32_4f82_9dbc_9f4b0932d2a5.slice - libcontainer container kubepods-burstable-pod6a5a9b8d_9b32_4f82_9dbc_9f4b0932d2a5.slice. Jun 25 18:48:35.245444 systemd[1]: Created slice kubepods-burstable-podb4115d96_5a89_409b_91ce_f310bc45304d.slice - libcontainer container kubepods-burstable-podb4115d96_5a89_409b_91ce_f310bc45304d.slice. Jun 25 18:48:35.347892 kubelet[2519]: I0625 18:48:35.347857 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5-config-volume\") pod \"coredns-76f75df574-2gz2z\" (UID: \"6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5\") " pod="kube-system/coredns-76f75df574-2gz2z" Jun 25 18:48:35.347892 kubelet[2519]: I0625 18:48:35.347896 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgfb\" (UniqueName: \"kubernetes.io/projected/6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5-kube-api-access-6dgfb\") pod \"coredns-76f75df574-2gz2z\" (UID: \"6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5\") " pod="kube-system/coredns-76f75df574-2gz2z" Jun 25 18:48:35.347892 kubelet[2519]: I0625 18:48:35.347917 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nsl\" (UniqueName: \"kubernetes.io/projected/b4115d96-5a89-409b-91ce-f310bc45304d-kube-api-access-h5nsl\") pod \"coredns-76f75df574-wqw79\" (UID: \"b4115d96-5a89-409b-91ce-f310bc45304d\") " pod="kube-system/coredns-76f75df574-wqw79" Jun 25 18:48:35.348163 kubelet[2519]: I0625 18:48:35.347937 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4115d96-5a89-409b-91ce-f310bc45304d-config-volume\") pod \"coredns-76f75df574-wqw79\" (UID: \"b4115d96-5a89-409b-91ce-f310bc45304d\") " pod="kube-system/coredns-76f75df574-wqw79" Jun 25 18:48:35.544300 kubelet[2519]: E0625 18:48:35.544270 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:35.544920 containerd[1445]: time="2024-06-25T18:48:35.544880616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2gz2z,Uid:6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:35.548354 kubelet[2519]: E0625 18:48:35.548334 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 
18:48:35.548687 containerd[1445]: time="2024-06-25T18:48:35.548660256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wqw79,Uid:b4115d96-5a89-409b-91ce-f310bc45304d,Namespace:kube-system,Attempt:0,}" Jun 25 18:48:35.777841 kubelet[2519]: E0625 18:48:35.777360 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:35.780266 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:42180.service - OpenSSH per-connection server daemon (10.0.0.1:42180). Jun 25 18:48:35.794810 kubelet[2519]: I0625 18:48:35.794634 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-l7d8g" podStartSLOduration=6.471297987 podStartE2EDuration="19.794583617s" podCreationTimestamp="2024-06-25 18:48:16 +0000 UTC" firstStartedPulling="2024-06-25 18:48:17.214118912 +0000 UTC m=+13.849076219" lastFinishedPulling="2024-06-25 18:48:30.537404542 +0000 UTC m=+27.172361849" observedRunningTime="2024-06-25 18:48:35.794252283 +0000 UTC m=+32.429209610" watchObservedRunningTime="2024-06-25 18:48:35.794583617 +0000 UTC m=+32.429540924" Jun 25 18:48:35.831557 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 42180 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:35.833192 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:35.837676 systemd-logind[1429]: New session 9 of user core. Jun 25 18:48:35.847778 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:48:36.008460 sshd[3382]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:36.016378 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:42180.service: Deactivated successfully. Jun 25 18:48:36.018339 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:48:36.019313 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:48:36.022068 systemd-logind[1429]: Removed session 9. 
Jun 25 18:48:36.779128 kubelet[2519]: E0625 18:48:36.779065 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:37.194826 systemd-networkd[1389]: cilium_host: Link UP Jun 25 18:48:37.195040 systemd-networkd[1389]: cilium_net: Link UP Jun 25 18:48:37.195345 systemd-networkd[1389]: cilium_net: Gained carrier Jun 25 18:48:37.195577 systemd-networkd[1389]: cilium_host: Gained carrier Jun 25 18:48:37.198928 systemd-networkd[1389]: cilium_net: Gained IPv6LL Jun 25 18:48:37.200819 systemd-networkd[1389]: cilium_host: Gained IPv6LL Jun 25 18:48:37.315916 systemd-networkd[1389]: cilium_vxlan: Link UP Jun 25 18:48:37.315930 systemd-networkd[1389]: cilium_vxlan: Gained carrier Jun 25 18:48:37.559702 kernel: NET: Registered PF_ALG protocol family Jun 25 18:48:37.780786 kubelet[2519]: E0625 18:48:37.780747 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:38.274017 systemd-networkd[1389]: lxc_health: Link UP Jun 25 18:48:38.286562 systemd-networkd[1389]: lxc_health: Gained carrier Jun 25 18:48:38.698553 systemd-networkd[1389]: lxcc8d1f02b5e1d: Link UP Jun 25 18:48:38.706307 systemd-networkd[1389]: lxc56317fa928de: Link UP Jun 25 18:48:38.719667 kernel: eth0: renamed from tmp76e98 Jun 25 18:48:38.756669 kernel: eth0: renamed from tmp46593 Jun 25 18:48:38.772909 systemd-networkd[1389]: lxcc8d1f02b5e1d: Gained carrier Jun 25 18:48:38.773213 systemd-networkd[1389]: lxc56317fa928de: Gained carrier Jun 25 18:48:38.854830 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL Jun 25 18:48:39.118481 kubelet[2519]: E0625 18:48:39.118369 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:39.809217 kubelet[2519]: E0625 18:48:39.809184 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:40.006880 systemd-networkd[1389]: lxc_health: Gained IPv6LL Jun 25 18:48:40.007277 systemd-networkd[1389]: lxc56317fa928de: Gained IPv6LL Jun 25 18:48:40.582826 systemd-networkd[1389]: lxcc8d1f02b5e1d: Gained IPv6LL Jun 25 18:48:40.811343 kubelet[2519]: E0625 18:48:40.811300 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:41.030044 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394). Jun 25 18:48:41.068405 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:41.069267 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:41.074214 systemd-logind[1429]: New session 10 of user core. Jun 25 18:48:41.083851 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:48:41.259883 sshd[3777]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:41.264511 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:34394.service: Deactivated successfully. Jun 25 18:48:41.266595 systemd[1]: session-10.scope: Deactivated successfully. 
Jun 25 18:48:41.267328 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:48:41.268678 systemd-logind[1429]: Removed session 10. Jun 25 18:48:42.422140 containerd[1445]: time="2024-06-25T18:48:42.422047992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:42.422140 containerd[1445]: time="2024-06-25T18:48:42.422102534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:42.422140 containerd[1445]: time="2024-06-25T18:48:42.422122302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:42.422140 containerd[1445]: time="2024-06-25T18:48:42.422134876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:42.426583 containerd[1445]: time="2024-06-25T18:48:42.426240910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:48:42.426583 containerd[1445]: time="2024-06-25T18:48:42.426293017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:42.426793 containerd[1445]: time="2024-06-25T18:48:42.426327964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:48:42.426793 containerd[1445]: time="2024-06-25T18:48:42.426345507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:48:42.445796 systemd[1]: Started cri-containerd-76e9831407ffaed04f15373c55b7c7fc7d09ad0b3f99953a205c912a68208b6e.scope - libcontainer container 76e9831407ffaed04f15373c55b7c7fc7d09ad0b3f99953a205c912a68208b6e. Jun 25 18:48:42.450904 systemd[1]: Started cri-containerd-4659304463c07b606ed6aec83bd587934a718b62265b348f0b68a42691ce8cfb.scope - libcontainer container 4659304463c07b606ed6aec83bd587934a718b62265b348f0b68a42691ce8cfb. 
Jun 25 18:48:42.460951 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:42.463773 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:48:42.490723 containerd[1445]: time="2024-06-25T18:48:42.490652869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wqw79,Uid:b4115d96-5a89-409b-91ce-f310bc45304d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4659304463c07b606ed6aec83bd587934a718b62265b348f0b68a42691ce8cfb\"" Jun 25 18:48:42.492240 kubelet[2519]: E0625 18:48:42.491930 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:42.492624 containerd[1445]: time="2024-06-25T18:48:42.491944465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2gz2z,Uid:6a5a9b8d-9b32-4f82-9dbc-9f4b0932d2a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"76e9831407ffaed04f15373c55b7c7fc7d09ad0b3f99953a205c912a68208b6e\"" Jun 25 18:48:42.493612 kubelet[2519]: E0625 18:48:42.493561 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:42.494968 containerd[1445]: time="2024-06-25T18:48:42.494864762Z" level=info msg="CreateContainer within sandbox \"4659304463c07b606ed6aec83bd587934a718b62265b348f0b68a42691ce8cfb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:48:42.495851 containerd[1445]: time="2024-06-25T18:48:42.495818894Z" level=info msg="CreateContainer within sandbox \"76e9831407ffaed04f15373c55b7c7fc7d09ad0b3f99953a205c912a68208b6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:48:42.580262 containerd[1445]: time="2024-06-25T18:48:42.580197576Z" level=info msg="CreateContainer within sandbox \"76e9831407ffaed04f15373c55b7c7fc7d09ad0b3f99953a205c912a68208b6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"397c34d8232cca3122f0f554fea2a6d8af7a3f7a834f02094e034d7bb24479d3\"" Jun 25 18:48:42.581188 containerd[1445]: time="2024-06-25T18:48:42.580727742Z" level=info msg="StartContainer for \"397c34d8232cca3122f0f554fea2a6d8af7a3f7a834f02094e034d7bb24479d3\"" Jun 25 18:48:42.590136 containerd[1445]: time="2024-06-25T18:48:42.590086796Z" level=info msg="CreateContainer within sandbox \"4659304463c07b606ed6aec83bd587934a718b62265b348f0b68a42691ce8cfb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a62b6bf353a010f8a1f4aeb66a052d7cc7efa71b528d60ecd4ed0d2d4d44332\"" Jun 25 18:48:42.591333 containerd[1445]: time="2024-06-25T18:48:42.590743390Z" level=info msg="StartContainer for \"9a62b6bf353a010f8a1f4aeb66a052d7cc7efa71b528d60ecd4ed0d2d4d44332\"" Jun 25 18:48:42.607769 systemd[1]: Started cri-containerd-397c34d8232cca3122f0f554fea2a6d8af7a3f7a834f02094e034d7bb24479d3.scope - libcontainer container 397c34d8232cca3122f0f554fea2a6d8af7a3f7a834f02094e034d7bb24479d3. Jun 25 18:48:42.627775 systemd[1]: Started cri-containerd-9a62b6bf353a010f8a1f4aeb66a052d7cc7efa71b528d60ecd4ed0d2d4d44332.scope - libcontainer container 9a62b6bf353a010f8a1f4aeb66a052d7cc7efa71b528d60ecd4ed0d2d4d44332. 
Jun 25 18:48:42.650034 containerd[1445]: time="2024-06-25T18:48:42.649925723Z" level=info msg="StartContainer for \"397c34d8232cca3122f0f554fea2a6d8af7a3f7a834f02094e034d7bb24479d3\" returns successfully" Jun 25 18:48:42.657900 containerd[1445]: time="2024-06-25T18:48:42.657705158Z" level=info msg="StartContainer for \"9a62b6bf353a010f8a1f4aeb66a052d7cc7efa71b528d60ecd4ed0d2d4d44332\" returns successfully" Jun 25 18:48:42.815950 kubelet[2519]: E0625 18:48:42.815919 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:42.818918 kubelet[2519]: E0625 18:48:42.818878 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:42.834394 kubelet[2519]: I0625 18:48:42.833237 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2gz2z" podStartSLOduration=26.833196234 podStartE2EDuration="26.833196234s" podCreationTimestamp="2024-06-25 18:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:42.825649976 +0000 UTC m=+39.460607283" watchObservedRunningTime="2024-06-25 18:48:42.833196234 +0000 UTC m=+39.468153541" Jun 25 18:48:42.842844 kubelet[2519]: I0625 18:48:42.842788 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wqw79" podStartSLOduration=26.84274775 podStartE2EDuration="26.84274775s" podCreationTimestamp="2024-06-25 18:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:48:42.842399896 +0000 UTC m=+39.477357203" watchObservedRunningTime="2024-06-25 18:48:42.84274775 +0000 UTC m=+39.477705057" Jun 25 18:48:43.428264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750154189.mount: Deactivated successfully. Jun 25 18:48:43.820519 kubelet[2519]: E0625 18:48:43.820485 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:43.821164 kubelet[2519]: E0625 18:48:43.820685 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:44.822142 kubelet[2519]: E0625 18:48:44.822112 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:44.822594 kubelet[2519]: E0625 18:48:44.822334 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:48:46.273933 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:45238.service - OpenSSH per-connection server daemon (10.0.0.1:45238). 
Jun 25 18:48:46.314769 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 45238 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:46.317161 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:46.321479 systemd-logind[1429]: New session 11 of user core. Jun 25 18:48:46.327760 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:48:46.531709 sshd[3966]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:46.541849 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:45238.service: Deactivated successfully. Jun 25 18:48:46.543756 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:48:46.545476 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:48:46.547472 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:45248.service - OpenSSH per-connection server daemon (10.0.0.1:45248). Jun 25 18:48:46.548592 systemd-logind[1429]: Removed session 11. Jun 25 18:48:46.580760 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 45248 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:46.582150 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:46.586357 systemd-logind[1429]: New session 12 of user core. Jun 25 18:48:46.595788 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:48:46.868715 sshd[3981]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:46.879436 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:45248.service: Deactivated successfully. Jun 25 18:48:46.881193 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:48:46.882662 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:48:46.883832 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:45262.service - OpenSSH per-connection server daemon (10.0.0.1:45262). Jun 25 18:48:46.884574 systemd-logind[1429]: Removed session 12. Jun 25 18:48:46.917038 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 45262 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:46.918920 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:46.923863 systemd-logind[1429]: New session 13 of user core. Jun 25 18:48:46.934959 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:48:47.093962 sshd[3993]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:47.098428 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:45262.service: Deactivated successfully. Jun 25 18:48:47.100400 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:48:47.101064 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:48:47.101906 systemd-logind[1429]: Removed session 13. Jun 25 18:48:52.114570 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:45266.service - OpenSSH per-connection server daemon (10.0.0.1:45266). Jun 25 18:48:52.154125 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 45266 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:52.156047 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:52.160657 systemd-logind[1429]: New session 14 of user core. Jun 25 18:48:52.169863 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 18:48:52.297404 sshd[4010]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:52.302519 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:45266.service: Deactivated successfully. Jun 25 18:48:52.304788 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:48:52.305543 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:48:52.306630 systemd-logind[1429]: Removed session 14. Jun 25 18:48:57.317162 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:56692.service - OpenSSH per-connection server daemon (10.0.0.1:56692). Jun 25 18:48:57.354012 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 56692 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:57.355614 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:57.360034 systemd-logind[1429]: New session 15 of user core. Jun 25 18:48:57.367788 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:48:57.505842 sshd[4025]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:57.518027 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:56692.service: Deactivated successfully. Jun 25 18:48:57.520425 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:48:57.522420 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:48:57.531027 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700). Jun 25 18:48:57.532114 systemd-logind[1429]: Removed session 15. Jun 25 18:48:57.562844 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:57.564604 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:57.568827 systemd-logind[1429]: New session 16 of user core. Jun 25 18:48:57.579788 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:48:58.164218 sshd[4039]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:58.180284 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:56700.service: Deactivated successfully. Jun 25 18:48:58.182178 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:48:58.183972 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:48:58.195234 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:56706.service - OpenSSH per-connection server daemon (10.0.0.1:56706). Jun 25 18:48:58.196253 systemd-logind[1429]: Removed session 16. Jun 25 18:48:58.228261 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 56706 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:58.230123 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:58.234658 systemd-logind[1429]: New session 17 of user core. Jun 25 18:48:58.244781 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:48:59.715090 sshd[4051]: pam_unix(sshd:session): session closed for user core Jun 25 18:48:59.722692 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:56706.service: Deactivated successfully. Jun 25 18:48:59.724708 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:48:59.727357 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:48:59.733296 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:56720.service - OpenSSH per-connection server daemon (10.0.0.1:56720). 
Jun 25 18:48:59.735038 systemd-logind[1429]: Removed session 17. Jun 25 18:48:59.766192 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 56720 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:48:59.768112 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:48:59.772978 systemd-logind[1429]: New session 18 of user core. Jun 25 18:48:59.779798 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:49:00.025840 sshd[4074]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:00.036087 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:56720.service: Deactivated successfully. Jun 25 18:49:00.038333 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:49:00.040127 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:49:00.044952 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:56724.service - OpenSSH per-connection server daemon (10.0.0.1:56724). Jun 25 18:49:00.046010 systemd-logind[1429]: Removed session 18. Jun 25 18:49:00.076597 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 56724 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:00.078282 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:00.082679 systemd-logind[1429]: New session 19 of user core. Jun 25 18:49:00.091808 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:49:00.199249 sshd[4086]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:00.203458 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:56724.service: Deactivated successfully. Jun 25 18:49:00.206174 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:49:00.206938 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:49:00.208208 systemd-logind[1429]: Removed session 19. Jun 25 18:49:05.210805 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:56732.service - OpenSSH per-connection server daemon (10.0.0.1:56732). Jun 25 18:49:05.246972 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 56732 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:05.248654 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:05.253066 systemd-logind[1429]: New session 20 of user core. Jun 25 18:49:05.263894 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:49:05.378423 sshd[4103]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:05.381859 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:56732.service: Deactivated successfully. Jun 25 18:49:05.383677 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:49:05.384250 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:49:05.385071 systemd-logind[1429]: Removed session 20. Jun 25 18:49:10.389517 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:50504.service - OpenSSH per-connection server daemon (10.0.0.1:50504). Jun 25 18:49:10.421593 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 50504 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:10.422867 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:10.426285 systemd-logind[1429]: New session 21 of user core. Jun 25 18:49:10.441785 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 18:49:10.542405 sshd[4120]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:10.546031 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:50504.service: Deactivated successfully. Jun 25 18:49:10.547907 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:49:10.548542 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:49:10.549420 systemd-logind[1429]: Removed session 21. Jun 25 18:49:15.553743 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:50506.service - OpenSSH per-connection server daemon (10.0.0.1:50506). Jun 25 18:49:15.585861 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 50506 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:15.587403 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:15.591067 systemd-logind[1429]: New session 22 of user core. Jun 25 18:49:15.603785 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:49:15.708463 sshd[4134]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:15.712620 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:50506.service: Deactivated successfully. Jun 25 18:49:15.714384 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:49:15.715123 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:49:15.715955 systemd-logind[1429]: Removed session 22. Jun 25 18:49:20.720791 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:43858.service - OpenSSH per-connection server daemon (10.0.0.1:43858). Jun 25 18:49:20.753747 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 43858 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:20.755271 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:20.759110 systemd-logind[1429]: New session 23 of user core. Jun 25 18:49:20.769798 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:49:20.873079 sshd[4150]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:20.886006 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:43858.service: Deactivated successfully. Jun 25 18:49:20.888208 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:49:20.890330 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:49:20.900957 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). Jun 25 18:49:20.902088 systemd-logind[1429]: Removed session 23. Jun 25 18:49:20.930191 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:20.931571 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:20.935328 systemd-logind[1429]: New session 24 of user core. Jun 25 18:49:20.947774 systemd[1]: Started session-24.scope - Session 24 of User core. 
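The run of entries above is a regular SSH session lifecycle: for each client connection systemd starts a per-connection `sshd@N-10.0.0.148:22-10.0.0.1:PORT.service` unit, pam_unix opens a session for user core, systemd-logind allocates `session N` with a matching `session-N.scope`, and the same objects are deactivated when the client disconnects. A minimal Go sketch that pairs the "New session" and "Session N logged out" entries from a journal dump in this format is shown below; the input file name is an assumption.

```go
// pair_sessions.go - illustrative only: pairs systemd-logind "New session N" /
// "Session N logged out" entries from a journal dump in the format shown above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// The file name is an assumption; pass any journal text in this format.
	f, err := os.Open("journal.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	newRe := regexp.MustCompile(`systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.`)
	delRe := regexp.MustCompile(`systemd-logind\[\d+\]: Session (\d+) logged out\.`)

	open := map[string]string{} // session ID -> user

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		// A single physical line may hold several entries, so collect all matches.
		for _, m := range newRe.FindAllStringSubmatch(line, -1) {
			open[m[1]] = m[2]
		}
		for _, m := range delRe.FindAllStringSubmatch(line, -1) {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
	for id, user := range open {
		fmt.Printf("session %s (user %s) still open\n", id, user)
	}
}
```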
Jun 25 18:49:21.661475 kubelet[2519]: E0625 18:49:21.661431 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:49:22.268038 containerd[1445]: time="2024-06-25T18:49:22.268000395Z" level=info msg="StopContainer for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" with timeout 30 (s)" Jun 25 18:49:22.284609 containerd[1445]: time="2024-06-25T18:49:22.284562810Z" level=info msg="Stop container \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" with signal terminated" Jun 25 18:49:22.300899 systemd[1]: cri-containerd-ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc.scope: Deactivated successfully. Jun 25 18:49:22.321290 containerd[1445]: time="2024-06-25T18:49:22.320533323Z" level=info msg="StopContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" with timeout 2 (s)" Jun 25 18:49:22.321746 containerd[1445]: time="2024-06-25T18:49:22.321669456Z" level=info msg="Stop container \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" with signal terminated" Jun 25 18:49:22.323333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc-rootfs.mount: Deactivated successfully. Jun 25 18:49:22.326373 containerd[1445]: time="2024-06-25T18:49:22.326304684Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:49:22.330443 systemd-networkd[1389]: lxc_health: Link DOWN Jun 25 18:49:22.330450 systemd-networkd[1389]: lxc_health: Lost carrier Jun 25 18:49:22.340174 containerd[1445]: time="2024-06-25T18:49:22.340122759Z" level=info msg="shim disconnected" id=ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc namespace=k8s.io Jun 25 18:49:22.340321 containerd[1445]: time="2024-06-25T18:49:22.340175189Z" level=warning msg="cleaning up after shim disconnected" id=ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc namespace=k8s.io Jun 25 18:49:22.340321 containerd[1445]: time="2024-06-25T18:49:22.340186731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:49:22.351174 systemd[1]: cri-containerd-477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85.scope: Deactivated successfully. Jun 25 18:49:22.351462 systemd[1]: cri-containerd-477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85.scope: Consumed 7.183s CPU time. Jun 25 18:49:22.367294 containerd[1445]: time="2024-06-25T18:49:22.367252008Z" level=info msg="StopContainer for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" returns successfully" Jun 25 18:49:22.371895 containerd[1445]: time="2024-06-25T18:49:22.371857088Z" level=info msg="StopPodSandbox for \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\"" Jun 25 18:49:22.372207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85-rootfs.mount: Deactivated successfully. 
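Interleaved with the container teardown, the kubelet keeps repeating the `dns.go:153 "Nameserver limits exceeded"` error: the node's resolv.conf lists more nameservers than Kubernetes will propagate into pod DNS config, so only the first three (`1.1.1.1 1.0.0.1 8.8.8.8` here) are applied and the rest are dropped. A small illustrative sketch of that check, assuming the standard limit of three nameservers and the usual `/etc/resolv.conf` location:

```go
// check_resolv.go - illustrative sketch: reproduce the condition behind the
// kubelet's "Nameserver limits exceeded" warning by counting nameserver
// entries in /etc/resolv.conf (Kubernetes propagates at most three).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the kubelet keeps the first three and omits the rest

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v would be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("%d nameservers configured, within the limit\n", len(servers))
	}
}
```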
Jun 25 18:49:22.374411 containerd[1445]: time="2024-06-25T18:49:22.371907264Z" level=info msg="Container to stop \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.376093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8-shm.mount: Deactivated successfully. Jun 25 18:49:22.377930 containerd[1445]: time="2024-06-25T18:49:22.377879269Z" level=info msg="shim disconnected" id=477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85 namespace=k8s.io Jun 25 18:49:22.377930 containerd[1445]: time="2024-06-25T18:49:22.377926409Z" level=warning msg="cleaning up after shim disconnected" id=477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85 namespace=k8s.io Jun 25 18:49:22.378027 containerd[1445]: time="2024-06-25T18:49:22.377934474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:49:22.382052 systemd[1]: cri-containerd-f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8.scope: Deactivated successfully. Jun 25 18:49:22.394942 containerd[1445]: time="2024-06-25T18:49:22.394820979Z" level=info msg="StopContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" returns successfully" Jun 25 18:49:22.395321 containerd[1445]: time="2024-06-25T18:49:22.395278274Z" level=info msg="StopPodSandbox for \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\"" Jun 25 18:49:22.395364 containerd[1445]: time="2024-06-25T18:49:22.395318531Z" level=info msg="Container to stop \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.395364 containerd[1445]: time="2024-06-25T18:49:22.395358157Z" level=info msg="Container to stop \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.395415 containerd[1445]: time="2024-06-25T18:49:22.395369849Z" level=info msg="Container to stop \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.395415 containerd[1445]: time="2024-06-25T18:49:22.395381792Z" level=info msg="Container to stop \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.395415 containerd[1445]: time="2024-06-25T18:49:22.395392112Z" level=info msg="Container to stop \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:49:22.397437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec-shm.mount: Deactivated successfully. Jun 25 18:49:22.403393 systemd[1]: cri-containerd-ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec.scope: Deactivated successfully. 
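The entries above and in the previous block are the normal CRI teardown path: the kubelet asks containerd to stop each container (one with a 30 s termination timeout, one with 2 s), the runc shim disconnects once the process exits, and the enclosing pod sandboxes `f2e12cb…` and `ff9b6df…` are then stopped so their network setup can be torn down. A hedged sketch of the same two calls against containerd's CRI socket follows; the socket path and the truncated IDs are placeholders, and the client package path assumes the CRI v1 API vendored at `k8s.io/cri-api`.

```go
// stop_container.go - minimal sketch (not the kubelet's own code): stop a
// container and then its pod sandbox over the CRI v1 gRPC API, mirroring the
// StopContainer/StopPodSandbox entries in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path and IDs are placeholders for illustration.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// "StopContainer for <id> with timeout 30": SIGTERM first, SIGKILL after 30 s.
	if _, err := client.StopContainer(ctx, &runtimev1.StopContainerRequest{
		ContainerId: "ca5eb98770810894...", // truncated placeholder
		Timeout:     30,
	}); err != nil {
		fmt.Println("StopContainer:", err)
	}

	// Once the containers have exited, tear down the enclosing sandbox.
	if _, err := client.StopPodSandbox(ctx, &runtimev1.StopPodSandboxRequest{
		PodSandboxId: "f2e12cb480f36874...", // truncated placeholder
	}); err != nil {
		fmt.Println("StopPodSandbox:", err)
	}
}
```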
Jun 25 18:49:22.407174 containerd[1445]: time="2024-06-25T18:49:22.407109949Z" level=info msg="shim disconnected" id=f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8 namespace=k8s.io Jun 25 18:49:22.407174 containerd[1445]: time="2024-06-25T18:49:22.407166607Z" level=warning msg="cleaning up after shim disconnected" id=f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8 namespace=k8s.io Jun 25 18:49:22.407295 containerd[1445]: time="2024-06-25T18:49:22.407178259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:49:22.420892 containerd[1445]: time="2024-06-25T18:49:22.420845645Z" level=info msg="TearDown network for sandbox \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\" successfully" Jun 25 18:49:22.420892 containerd[1445]: time="2024-06-25T18:49:22.420878559Z" level=info msg="StopPodSandbox for \"f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8\" returns successfully" Jun 25 18:49:22.438680 containerd[1445]: time="2024-06-25T18:49:22.438579180Z" level=info msg="shim disconnected" id=ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec namespace=k8s.io Jun 25 18:49:22.438680 containerd[1445]: time="2024-06-25T18:49:22.438667731Z" level=warning msg="cleaning up after shim disconnected" id=ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec namespace=k8s.io Jun 25 18:49:22.438680 containerd[1445]: time="2024-06-25T18:49:22.438680815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:49:22.454466 containerd[1445]: time="2024-06-25T18:49:22.454418202Z" level=info msg="TearDown network for sandbox \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" successfully" Jun 25 18:49:22.454466 containerd[1445]: time="2024-06-25T18:49:22.454459000Z" level=info msg="StopPodSandbox for \"ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec\" returns successfully" Jun 25 18:49:22.513281 kubelet[2519]: I0625 18:49:22.513256 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-xtables-lock\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513281 kubelet[2519]: I0625 18:49:22.513286 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-lib-modules\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513305 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-bpf-maps\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513324 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-net\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513349 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-hubble-tls\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513369 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78813f90-da93-423e-809d-14ef08c774f8-cilium-config-path\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513386 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cni-path\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513438 kubelet[2519]: I0625 18:49:22.513402 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-hostproc\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513581 kubelet[2519]: I0625 18:49:22.513424 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78813f90-da93-423e-809d-14ef08c774f8-clustermesh-secrets\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513581 kubelet[2519]: I0625 18:49:22.513411 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.513581 kubelet[2519]: I0625 18:49:22.513459 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.513581 kubelet[2519]: I0625 18:49:22.513411 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.513581 kubelet[2519]: I0625 18:49:22.513440 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-run\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513737 kubelet[2519]: I0625 18:49:22.513482 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.513737 kubelet[2519]: I0625 18:49:22.513498 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.513737 kubelet[2519]: I0625 18:49:22.513524 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncfbm\" (UniqueName: \"kubernetes.io/projected/51bf4984-3dc5-4055-85fc-f034a725d28d-kube-api-access-ncfbm\") pod \"51bf4984-3dc5-4055-85fc-f034a725d28d\" (UID: \"51bf4984-3dc5-4055-85fc-f034a725d28d\") " Jun 25 18:49:22.513737 kubelet[2519]: I0625 18:49:22.513557 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51bf4984-3dc5-4055-85fc-f034a725d28d-cilium-config-path\") pod \"51bf4984-3dc5-4055-85fc-f034a725d28d\" (UID: \"51bf4984-3dc5-4055-85fc-f034a725d28d\") " Jun 25 18:49:22.513737 kubelet[2519]: I0625 18:49:22.513581 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-kernel\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513853 kubelet[2519]: I0625 18:49:22.513605 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-cgroup\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513853 kubelet[2519]: I0625 18:49:22.513631 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-etc-cni-netd\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513853 kubelet[2519]: I0625 18:49:22.513695 2519 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qrlc\" (UniqueName: \"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-kube-api-access-7qrlc\") pod \"78813f90-da93-423e-809d-14ef08c774f8\" (UID: \"78813f90-da93-423e-809d-14ef08c774f8\") " Jun 25 18:49:22.513853 kubelet[2519]: I0625 18:49:22.513776 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.514504 kubelet[2519]: I0625 18:49:22.514329 2519 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.514504 kubelet[2519]: I0625 18:49:22.514396 2519 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.514504 kubelet[2519]: I0625 18:49:22.514411 2519 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.514504 kubelet[2519]: I0625 18:49:22.514423 2519 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.514504 kubelet[2519]: I0625 18:49:22.514435 2519 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.515704 kubelet[2519]: I0625 18:49:22.515672 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.515786 kubelet[2519]: I0625 18:49:22.515763 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.517230 kubelet[2519]: I0625 18:49:22.517193 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bf4984-3dc5-4055-85fc-f034a725d28d-kube-api-access-ncfbm" (OuterVolumeSpecName: "kube-api-access-ncfbm") pod "51bf4984-3dc5-4055-85fc-f034a725d28d" (UID: "51bf4984-3dc5-4055-85fc-f034a725d28d"). InnerVolumeSpecName "kube-api-access-ncfbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:49:22.517328 kubelet[2519]: I0625 18:49:22.517248 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.517328 kubelet[2519]: I0625 18:49:22.517265 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:49:22.519331 kubelet[2519]: I0625 18:49:22.519214 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78813f90-da93-423e-809d-14ef08c774f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:49:22.519981 kubelet[2519]: I0625 18:49:22.519511 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:49:22.519981 kubelet[2519]: I0625 18:49:22.519792 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78813f90-da93-423e-809d-14ef08c774f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:49:22.519981 kubelet[2519]: I0625 18:49:22.519976 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bf4984-3dc5-4055-85fc-f034a725d28d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51bf4984-3dc5-4055-85fc-f034a725d28d" (UID: "51bf4984-3dc5-4055-85fc-f034a725d28d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:49:22.520718 kubelet[2519]: I0625 18:49:22.520620 2519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-kube-api-access-7qrlc" (OuterVolumeSpecName: "kube-api-access-7qrlc") pod "78813f90-da93-423e-809d-14ef08c774f8" (UID: "78813f90-da93-423e-809d-14ef08c774f8"). InnerVolumeSpecName "kube-api-access-7qrlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:49:22.614824 kubelet[2519]: I0625 18:49:22.614786 2519 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78813f90-da93-423e-809d-14ef08c774f8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614824 kubelet[2519]: I0625 18:49:22.614818 2519 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614824 kubelet[2519]: I0625 18:49:22.614830 2519 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ncfbm\" (UniqueName: \"kubernetes.io/projected/51bf4984-3dc5-4055-85fc-f034a725d28d-kube-api-access-ncfbm\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614841 2519 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51bf4984-3dc5-4055-85fc-f034a725d28d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614851 2519 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614860 2519 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614871 2519 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7qrlc\" (UniqueName: \"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-kube-api-access-7qrlc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614879 2519 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614893 2519 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78813f90-da93-423e-809d-14ef08c774f8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614902 2519 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78813f90-da93-423e-809d-14ef08c774f8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.614963 kubelet[2519]: I0625 18:49:22.614911 2519 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78813f90-da93-423e-809d-14ef08c774f8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:49:22.661782 kubelet[2519]: E0625 18:49:22.661747 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:49:22.888773 kubelet[2519]: I0625 18:49:22.888739 2519 scope.go:117] "RemoveContainer" containerID="477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85" Jun 25 18:49:22.891251 containerd[1445]: 
time="2024-06-25T18:49:22.891199878Z" level=info msg="RemoveContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\"" Jun 25 18:49:22.896122 systemd[1]: Removed slice kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice - libcontainer container kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice. Jun 25 18:49:22.896252 systemd[1]: kubepods-burstable-pod78813f90_da93_423e_809d_14ef08c774f8.slice: Consumed 7.290s CPU time. Jun 25 18:49:22.897615 containerd[1445]: time="2024-06-25T18:49:22.897579764Z" level=info msg="RemoveContainer for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" returns successfully" Jun 25 18:49:22.897808 kubelet[2519]: I0625 18:49:22.897773 2519 scope.go:117] "RemoveContainer" containerID="ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8" Jun 25 18:49:22.898837 systemd[1]: Removed slice kubepods-besteffort-pod51bf4984_3dc5_4055_85fc_f034a725d28d.slice - libcontainer container kubepods-besteffort-pod51bf4984_3dc5_4055_85fc_f034a725d28d.slice. Jun 25 18:49:22.899044 containerd[1445]: time="2024-06-25T18:49:22.898990061Z" level=info msg="RemoveContainer for \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\"" Jun 25 18:49:22.907773 containerd[1445]: time="2024-06-25T18:49:22.907712748Z" level=info msg="RemoveContainer for \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\" returns successfully" Jun 25 18:49:22.908064 kubelet[2519]: I0625 18:49:22.908024 2519 scope.go:117] "RemoveContainer" containerID="f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111" Jun 25 18:49:22.909051 containerd[1445]: time="2024-06-25T18:49:22.909023937Z" level=info msg="RemoveContainer for \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\"" Jun 25 18:49:22.917139 containerd[1445]: time="2024-06-25T18:49:22.917088053Z" level=info msg="RemoveContainer for \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\" returns successfully" Jun 25 18:49:22.917364 kubelet[2519]: I0625 18:49:22.917336 2519 scope.go:117] "RemoveContainer" containerID="b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f" Jun 25 18:49:22.918587 containerd[1445]: time="2024-06-25T18:49:22.918329439Z" level=info msg="RemoveContainer for \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\"" Jun 25 18:49:22.921753 containerd[1445]: time="2024-06-25T18:49:22.921699876Z" level=info msg="RemoveContainer for \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\" returns successfully" Jun 25 18:49:22.921949 kubelet[2519]: I0625 18:49:22.921923 2519 scope.go:117] "RemoveContainer" containerID="cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b" Jun 25 18:49:22.922780 containerd[1445]: time="2024-06-25T18:49:22.922744405Z" level=info msg="RemoveContainer for \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\"" Jun 25 18:49:22.925778 containerd[1445]: time="2024-06-25T18:49:22.925748552Z" level=info msg="RemoveContainer for \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\" returns successfully" Jun 25 18:49:22.925930 kubelet[2519]: I0625 18:49:22.925893 2519 scope.go:117] "RemoveContainer" containerID="477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85" Jun 25 18:49:22.926129 containerd[1445]: time="2024-06-25T18:49:22.926062933Z" level=error msg="ContainerStatus for \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\": not found" Jun 25 18:49:22.932261 kubelet[2519]: E0625 18:49:22.932235 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\": not found" containerID="477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85" Jun 25 18:49:22.932332 kubelet[2519]: I0625 18:49:22.932318 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85"} err="failed to get container status \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\": rpc error: code = NotFound desc = an error occurred when try to find container \"477dc14bcdb526c67f0830d6f92dd735d58acfa94028d3f05a8d9a3070230d85\": not found" Jun 25 18:49:22.932332 kubelet[2519]: I0625 18:49:22.932332 2519 scope.go:117] "RemoveContainer" containerID="ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8" Jun 25 18:49:22.932515 containerd[1445]: time="2024-06-25T18:49:22.932483476Z" level=error msg="ContainerStatus for \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\": not found" Jun 25 18:49:22.932603 kubelet[2519]: E0625 18:49:22.932586 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\": not found" containerID="ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8" Jun 25 18:49:22.932633 kubelet[2519]: I0625 18:49:22.932615 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8"} err="failed to get container status \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce5606729d1f86482a6c669096c03bc6558a06601c3fc5f163dba2a638a8d5c8\": not found" Jun 25 18:49:22.932633 kubelet[2519]: I0625 18:49:22.932626 2519 scope.go:117] "RemoveContainer" containerID="f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111" Jun 25 18:49:22.932797 containerd[1445]: time="2024-06-25T18:49:22.932776726Z" level=error msg="ContainerStatus for \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\": not found" Jun 25 18:49:22.932919 kubelet[2519]: E0625 18:49:22.932902 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\": not found" containerID="f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111" Jun 25 18:49:22.932970 kubelet[2519]: I0625 18:49:22.932940 2519 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111"} err="failed to get container status \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\": rpc error: code = NotFound desc = an error occurred when try to find container \"f606aa28996745e01ce49f08120c8e99834f367f186bdb19224bfb7d1a97d111\": not found" Jun 25 18:49:22.932970 kubelet[2519]: I0625 18:49:22.932952 2519 scope.go:117] "RemoveContainer" containerID="b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f" Jun 25 18:49:22.933138 containerd[1445]: time="2024-06-25T18:49:22.933094325Z" level=error msg="ContainerStatus for \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\": not found" Jun 25 18:49:22.933223 kubelet[2519]: E0625 18:49:22.933211 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\": not found" containerID="b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f" Jun 25 18:49:22.933255 kubelet[2519]: I0625 18:49:22.933230 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f"} err="failed to get container status \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0b506f23f6c3fd31262de8e0dd4bcd9623dcd5c0f385ffaa73425792925e61f\": not found" Jun 25 18:49:22.933255 kubelet[2519]: I0625 18:49:22.933238 2519 scope.go:117] "RemoveContainer" containerID="cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b" Jun 25 18:49:22.933436 containerd[1445]: time="2024-06-25T18:49:22.933394118Z" level=error msg="ContainerStatus for \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\": not found" Jun 25 18:49:22.933542 kubelet[2519]: E0625 18:49:22.933521 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\": not found" containerID="cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b" Jun 25 18:49:22.933542 kubelet[2519]: I0625 18:49:22.933543 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b"} err="failed to get container status \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdae7fc36aa0844593bae1b95f89adb637265c65db35bc0337bbdc494e19357b\": not found" Jun 25 18:49:22.933622 kubelet[2519]: I0625 18:49:22.933554 2519 scope.go:117] "RemoveContainer" containerID="ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc" Jun 25 18:49:22.934266 containerd[1445]: time="2024-06-25T18:49:22.934211922Z" level=info msg="RemoveContainer for 
\"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\"" Jun 25 18:49:22.937333 containerd[1445]: time="2024-06-25T18:49:22.937299559Z" level=info msg="RemoveContainer for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" returns successfully" Jun 25 18:49:22.937450 kubelet[2519]: I0625 18:49:22.937424 2519 scope.go:117] "RemoveContainer" containerID="ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc" Jun 25 18:49:22.937585 containerd[1445]: time="2024-06-25T18:49:22.937552763Z" level=error msg="ContainerStatus for \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\": not found" Jun 25 18:49:22.937723 kubelet[2519]: E0625 18:49:22.937700 2519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\": not found" containerID="ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc" Jun 25 18:49:22.937767 kubelet[2519]: I0625 18:49:22.937737 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc"} err="failed to get container status \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca5eb987708108948658a225a8c92fd0908130603b1ead79cdfd19fab46ba7fc\": not found" Jun 25 18:49:23.300192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff9b6df44a41f0788c926ba125a42869a3d544ef92655c0d77d1b776b3d6edec-rootfs.mount: Deactivated successfully. Jun 25 18:49:23.300315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2e12cb480f368741a772c231b44719e578a08bf9627daef76941a57bab0e1c8-rootfs.mount: Deactivated successfully. Jun 25 18:49:23.300387 systemd[1]: var-lib-kubelet-pods-78813f90\x2dda93\x2d423e\x2d809d\x2d14ef08c774f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7qrlc.mount: Deactivated successfully. Jun 25 18:49:23.300464 systemd[1]: var-lib-kubelet-pods-51bf4984\x2d3dc5\x2d4055\x2d85fc\x2df034a725d28d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncfbm.mount: Deactivated successfully. Jun 25 18:49:23.300546 systemd[1]: var-lib-kubelet-pods-78813f90\x2dda93\x2d423e\x2d809d\x2d14ef08c774f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:49:23.300625 systemd[1]: var-lib-kubelet-pods-78813f90\x2dda93\x2d423e\x2d809d\x2d14ef08c774f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jun 25 18:49:23.663562 kubelet[2519]: I0625 18:49:23.663444 2519 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="51bf4984-3dc5-4055-85fc-f034a725d28d" path="/var/lib/kubelet/pods/51bf4984-3dc5-4055-85fc-f034a725d28d/volumes" Jun 25 18:49:23.664088 kubelet[2519]: I0625 18:49:23.664062 2519 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="78813f90-da93-423e-809d-14ef08c774f8" path="/var/lib/kubelet/pods/78813f90-da93-423e-809d-14ef08c774f8/volumes" Jun 25 18:49:23.720819 kubelet[2519]: E0625 18:49:23.720791 2519 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:49:24.242470 sshd[4164]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:24.253349 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:43874.service: Deactivated successfully. Jun 25 18:49:24.255069 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:49:24.256418 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:49:24.261095 systemd[1]: Started sshd@24-10.0.0.148:22-10.0.0.1:43880.service - OpenSSH per-connection server daemon (10.0.0.1:43880). Jun 25 18:49:24.261888 systemd-logind[1429]: Removed session 24. Jun 25 18:49:24.289823 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 43880 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:24.291234 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:24.294847 systemd-logind[1429]: New session 25 of user core. Jun 25 18:49:24.301750 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:49:24.714672 sshd[4324]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:24.725712 systemd[1]: sshd@24-10.0.0.148:22-10.0.0.1:43880.service: Deactivated successfully. Jun 25 18:49:24.728436 systemd[1]: session-25.scope: Deactivated successfully. 
Jun 25 18:49:24.734539 kubelet[2519]: I0625 18:49:24.732621 2519 topology_manager.go:215] "Topology Admit Handler" podUID="4427b7a8-a870-4fa2-8b91-28b8a559edf3" podNamespace="kube-system" podName="cilium-gz9c9" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732691 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="apply-sysctl-overwrites" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732700 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="clean-cilium-state" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732708 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="cilium-agent" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732716 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51bf4984-3dc5-4055-85fc-f034a725d28d" containerName="cilium-operator" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732723 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="mount-cgroup" Jun 25 18:49:24.734539 kubelet[2519]: E0625 18:49:24.732730 2519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="mount-bpf-fs" Jun 25 18:49:24.734539 kubelet[2519]: I0625 18:49:24.732759 2519 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bf4984-3dc5-4055-85fc-f034a725d28d" containerName="cilium-operator" Jun 25 18:49:24.734539 kubelet[2519]: I0625 18:49:24.732769 2519 memory_manager.go:354] "RemoveStaleState removing state" podUID="78813f90-da93-423e-809d-14ef08c774f8" containerName="cilium-agent" Jun 25 18:49:24.732798 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:49:24.751063 systemd[1]: Started sshd@25-10.0.0.148:22-10.0.0.1:43882.service - OpenSSH per-connection server daemon (10.0.0.1:43882). Jun 25 18:49:24.752222 systemd-logind[1429]: Removed session 25. Jun 25 18:49:24.756574 systemd[1]: Created slice kubepods-burstable-pod4427b7a8_a870_4fa2_8b91_28b8a559edf3.slice - libcontainer container kubepods-burstable-pod4427b7a8_a870_4fa2_8b91_28b8a559edf3.slice. Jun 25 18:49:24.779741 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:24.781157 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:24.784676 systemd-logind[1429]: New session 26 of user core. Jun 25 18:49:24.793779 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 18:49:24.826609 kubelet[2519]: I0625 18:49:24.826583 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4427b7a8-a870-4fa2-8b91-28b8a559edf3-cilium-config-path\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826609 kubelet[2519]: I0625 18:49:24.826620 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-cilium-cgroup\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826728 kubelet[2519]: I0625 18:49:24.826660 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-lib-modules\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826728 kubelet[2519]: I0625 18:49:24.826680 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-xtables-lock\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826785 kubelet[2519]: I0625 18:49:24.826738 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-host-proc-sys-kernel\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826846 kubelet[2519]: I0625 18:49:24.826822 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-etc-cni-netd\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826958 kubelet[2519]: I0625 18:49:24.826937 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-host-proc-sys-net\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.826958 kubelet[2519]: I0625 18:49:24.826963 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4427b7a8-a870-4fa2-8b91-28b8a559edf3-hubble-tls\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827084 kubelet[2519]: I0625 18:49:24.826981 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-cni-path\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827084 kubelet[2519]: I0625 18:49:24.827001 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/4427b7a8-a870-4fa2-8b91-28b8a559edf3-clustermesh-secrets\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827084 kubelet[2519]: I0625 18:49:24.827019 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4427b7a8-a870-4fa2-8b91-28b8a559edf3-cilium-ipsec-secrets\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827084 kubelet[2519]: I0625 18:49:24.827050 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-hostproc\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827084 kubelet[2519]: I0625 18:49:24.827077 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-cilium-run\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827202 kubelet[2519]: I0625 18:49:24.827111 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4427b7a8-a870-4fa2-8b91-28b8a559edf3-bpf-maps\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.827202 kubelet[2519]: I0625 18:49:24.827134 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqz9\" (UniqueName: \"kubernetes.io/projected/4427b7a8-a870-4fa2-8b91-28b8a559edf3-kube-api-access-fpqz9\") pod \"cilium-gz9c9\" (UID: \"4427b7a8-a870-4fa2-8b91-28b8a559edf3\") " pod="kube-system/cilium-gz9c9" Jun 25 18:49:24.843663 sshd[4337]: pam_unix(sshd:session): session closed for user core Jun 25 18:49:24.855345 systemd[1]: sshd@25-10.0.0.148:22-10.0.0.1:43882.service: Deactivated successfully. Jun 25 18:49:24.856996 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:49:24.858744 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:49:24.866883 systemd[1]: Started sshd@26-10.0.0.148:22-10.0.0.1:43892.service - OpenSSH per-connection server daemon (10.0.0.1:43892). Jun 25 18:49:24.867695 systemd-logind[1429]: Removed session 26. Jun 25 18:49:24.895285 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 43892 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:49:24.896877 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:49:24.900522 systemd-logind[1429]: New session 27 of user core. Jun 25 18:49:24.906767 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 18:49:25.059867 kubelet[2519]: E0625 18:49:25.059574 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:25.060127 containerd[1445]: time="2024-06-25T18:49:25.060085035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz9c9,Uid:4427b7a8-a870-4fa2-8b91-28b8a559edf3,Namespace:kube-system,Attempt:0,}"
Jun 25 18:49:25.080705 containerd[1445]: time="2024-06-25T18:49:25.080621362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:49:25.080705 containerd[1445]: time="2024-06-25T18:49:25.080688440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:49:25.080811 containerd[1445]: time="2024-06-25T18:49:25.080712005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:49:25.080811 containerd[1445]: time="2024-06-25T18:49:25.080728697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:49:25.099778 systemd[1]: Started cri-containerd-2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0.scope - libcontainer container 2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0.
Jun 25 18:49:25.105845 kubelet[2519]: I0625 18:49:25.105814 2519 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:49:25Z","lastTransitionTime":"2024-06-25T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 25 18:49:25.124131 containerd[1445]: time="2024-06-25T18:49:25.124083241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz9c9,Uid:4427b7a8-a870-4fa2-8b91-28b8a559edf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\""
Jun 25 18:49:25.125126 kubelet[2519]: E0625 18:49:25.125085 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:25.127450 containerd[1445]: time="2024-06-25T18:49:25.127403794Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 25 18:49:25.141213 containerd[1445]: time="2024-06-25T18:49:25.141177624Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828\""
Jun 25 18:49:25.141704 containerd[1445]: time="2024-06-25T18:49:25.141675586Z" level=info msg="StartContainer for \"eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828\""
Jun 25 18:49:25.172794 systemd[1]: Started cri-containerd-eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828.scope - libcontainer container eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828.
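The RunPodSandbox and CreateContainer messages above print the CRI metadata structs verbatim (&PodSandboxMetadata{Name:cilium-gz9c9,...} and &ContainerMetadata{Name:mount-cgroup,...}). Below is a minimal sketch of how a CRI client could populate the same metadata; it assumes the k8s.io/cri-api runtime v1 package, and the real kubelet requests carry far more configuration (mounts, labels, linux options) than shown here.

```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // assumed CRI v1 API package
)

func main() {
	// Sandbox metadata as printed in the RunPodSandbox log line above.
	sandboxMeta := &runtimeapi.PodSandboxMetadata{
		Name:      "cilium-gz9c9",
		Uid:       "4427b7a8-a870-4fa2-8b91-28b8a559edf3",
		Namespace: "kube-system",
		Attempt:   0,
	}

	// Container metadata as printed in the first CreateContainer log line above.
	containerMeta := &runtimeapi.ContainerMetadata{
		Name:    "mount-cgroup",
		Attempt: 0,
	}

	// Shapes of the requests a CRI client would send over the containerd socket;
	// only the metadata fields seen in the log are filled in.
	runReq := &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{Metadata: sandboxMeta},
	}
	createReq := &runtimeapi.CreateContainerRequest{
		PodSandboxId: "2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0",
		Config:       &runtimeapi.ContainerConfig{Metadata: containerMeta},
	}

	fmt.Println(runReq, createReq)
}
```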
Jun 25 18:49:25.196447 containerd[1445]: time="2024-06-25T18:49:25.196370190Z" level=info msg="StartContainer for \"eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828\" returns successfully"
Jun 25 18:49:25.205376 systemd[1]: cri-containerd-eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828.scope: Deactivated successfully.
Jun 25 18:49:25.234957 containerd[1445]: time="2024-06-25T18:49:25.234893706Z" level=info msg="shim disconnected" id=eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828 namespace=k8s.io
Jun 25 18:49:25.234957 containerd[1445]: time="2024-06-25T18:49:25.234947710Z" level=warning msg="cleaning up after shim disconnected" id=eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828 namespace=k8s.io
Jun 25 18:49:25.234957 containerd[1445]: time="2024-06-25T18:49:25.234957078Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:49:25.899185 kubelet[2519]: E0625 18:49:25.899154 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:25.901117 containerd[1445]: time="2024-06-25T18:49:25.901079649Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 25 18:49:25.912735 containerd[1445]: time="2024-06-25T18:49:25.912684104Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3\""
Jun 25 18:49:25.913438 containerd[1445]: time="2024-06-25T18:49:25.913155014Z" level=info msg="StartContainer for \"d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3\""
Jun 25 18:49:25.947774 systemd[1]: Started cri-containerd-d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3.scope - libcontainer container d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3.
Jun 25 18:49:25.970740 containerd[1445]: time="2024-06-25T18:49:25.970693923Z" level=info msg="StartContainer for \"d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3\" returns successfully"
Jun 25 18:49:25.978213 systemd[1]: cri-containerd-d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3.scope: Deactivated successfully.
Jun 25 18:49:25.996454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3-rootfs.mount: Deactivated successfully.
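Note how each container appears as two transient systemd units in these entries: a cri-containerd-<id>.scope while the task runs, and a run-containerd-...-<id>-rootfs.mount for its root filesystem, both reported deactivated once the container exits. The helper below merely reproduces that naming pattern as observed in this log; it is a string-formatting sketch, not a containerd or systemd API.

```go
package main

import "fmt"

// unitNamesFor builds the transient-unit names observed in this log for a
// given container ID: the libcontainer scope and the rootfs mount unit
// (systemd flattens the mount path into a dash-separated unit name).
func unitNamesFor(containerID string) (scope, rootfsMount string) {
	scope = fmt.Sprintf("cri-containerd-%s.scope", containerID)
	rootfsMount = fmt.Sprintf(
		"run-containerd-io.containerd.runtime.v2.task-k8s.io-%s-rootfs.mount", containerID)
	return scope, rootfsMount
}

func main() {
	// The apply-sysctl-overwrites container from the entries above.
	scope, mount := unitNamesFor("d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3")
	fmt.Println(scope)
	fmt.Println(mount)
}
```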
Jun 25 18:49:26.001004 containerd[1445]: time="2024-06-25T18:49:26.000936821Z" level=info msg="shim disconnected" id=d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3 namespace=k8s.io
Jun 25 18:49:26.001004 containerd[1445]: time="2024-06-25T18:49:26.000992358Z" level=warning msg="cleaning up after shim disconnected" id=d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3 namespace=k8s.io
Jun 25 18:49:26.001004 containerd[1445]: time="2024-06-25T18:49:26.001002897Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:49:26.911691 kubelet[2519]: E0625 18:49:26.911614 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:26.914353 containerd[1445]: time="2024-06-25T18:49:26.914307939Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 25 18:49:26.939902 containerd[1445]: time="2024-06-25T18:49:26.939847040Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080\""
Jun 25 18:49:26.940491 containerd[1445]: time="2024-06-25T18:49:26.940458207Z" level=info msg="StartContainer for \"6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080\""
Jun 25 18:49:26.968765 systemd[1]: Started cri-containerd-6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080.scope - libcontainer container 6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080.
Jun 25 18:49:26.997717 containerd[1445]: time="2024-06-25T18:49:26.997660942Z" level=info msg="StartContainer for \"6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080\" returns successfully"
Jun 25 18:49:26.997847 systemd[1]: cri-containerd-6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080.scope: Deactivated successfully.
Jun 25 18:49:27.018447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080-rootfs.mount: Deactivated successfully.
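The recurring dns.go:153 error means the node's resolv.conf lists more nameservers than the kubelet will propagate to pods, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped. Below is a minimal sketch of that truncation rule under the conventional three-server cap; it is illustrative rather than the kubelet's actual dns.go implementation, and the fourth server in the example is hypothetical.

```go
package main

import "fmt"

// maxNameservers mirrors the limit the kubelet warns about in the log above:
// resolv.conf entries beyond the first three are dropped.
const maxNameservers = 3

// applyNameserverLimit returns the servers that would actually be applied for
// a pod, plus the ones that get omitted (which triggers the warning).
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// Hypothetical host resolv.conf with one server too many; the first three
	// match the "applied nameserver line" reported in the log.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
	applied, omitted := applyNameserverLimit(host)
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
```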
Jun 25 18:49:27.026456 containerd[1445]: time="2024-06-25T18:49:27.026374361Z" level=info msg="shim disconnected" id=6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080 namespace=k8s.io
Jun 25 18:49:27.026456 containerd[1445]: time="2024-06-25T18:49:27.026430328Z" level=warning msg="cleaning up after shim disconnected" id=6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080 namespace=k8s.io
Jun 25 18:49:27.026456 containerd[1445]: time="2024-06-25T18:49:27.026440296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:49:27.661278 kubelet[2519]: E0625 18:49:27.661235 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:27.915474 kubelet[2519]: E0625 18:49:27.915353 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:27.917895 containerd[1445]: time="2024-06-25T18:49:27.917716769Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 25 18:49:27.931258 containerd[1445]: time="2024-06-25T18:49:27.931213837Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854\""
Jun 25 18:49:27.931668 containerd[1445]: time="2024-06-25T18:49:27.931613249Z" level=info msg="StartContainer for \"1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854\""
Jun 25 18:49:27.962854 systemd[1]: Started cri-containerd-1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854.scope - libcontainer container 1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854.
Jun 25 18:49:27.986063 systemd[1]: cri-containerd-1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854.scope: Deactivated successfully.
Jun 25 18:49:27.987989 containerd[1445]: time="2024-06-25T18:49:27.987949419Z" level=info msg="StartContainer for \"1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854\" returns successfully"
Jun 25 18:49:28.007977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854-rootfs.mount: Deactivated successfully.
Jun 25 18:49:28.011178 containerd[1445]: time="2024-06-25T18:49:28.011119608Z" level=info msg="shim disconnected" id=1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854 namespace=k8s.io
Jun 25 18:49:28.011178 containerd[1445]: time="2024-06-25T18:49:28.011171076Z" level=warning msg="cleaning up after shim disconnected" id=1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854 namespace=k8s.io
Jun 25 18:49:28.011178 containerd[1445]: time="2024-06-25T18:49:28.011180043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:49:28.721570 kubelet[2519]: E0625 18:49:28.721521 2519 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 25 18:49:28.919112 kubelet[2519]: E0625 18:49:28.919086 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:28.921331 containerd[1445]: time="2024-06-25T18:49:28.921282621Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 25 18:49:28.936045 containerd[1445]: time="2024-06-25T18:49:28.935991288Z" level=info msg="CreateContainer within sandbox \"2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158\""
Jun 25 18:49:28.936472 containerd[1445]: time="2024-06-25T18:49:28.936433031Z" level=info msg="StartContainer for \"1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158\""
Jun 25 18:49:28.969770 systemd[1]: Started cri-containerd-1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158.scope - libcontainer container 1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158.
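Taken together, the entries above show sandbox 2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0 running the Cilium setup steps strictly one after another, each exiting (scope deactivated, shim disconnected) before the next is created, with the long-running cilium-agent container last. The sketch below simply records that observed order together with the container IDs from this log; it is a summary of the log, not of Cilium's manifest.

```go
package main

import "fmt"

// step pairs a container name with the ID containerd reported for it in this log.
type step struct {
	name string
	id   string
}

func main() {
	// Order in which CreateContainer/StartContainer appear for sandbox
	// 2c343cdabbec88a09b5c23dc46b940a4e7f56d797ef338efbfaf1816efe4d0c0.
	sequence := []step{
		{"mount-cgroup", "eaaa72a90a06b17bea9f3408b1b37ba955c39c99bde251120c7c5656349b4828"},
		{"apply-sysctl-overwrites", "d06560b45a7038cc35ed09313cc860ef3407d30ff7d9604c8eff7fe94bc93bd3"},
		{"mount-bpf-fs", "6444bc1ef895c905637cbcbce963cc7ee23e644b4b85e558909e72283a84f080"},
		{"clean-cilium-state", "1ad8c8545cbe6061045bfcb4f50cf90be9e18d228b4c5ed7d811d93d3d1bb854"},
		{"cilium-agent", "1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158"},
	}
	for i, s := range sequence {
		fmt.Printf("%d. %-24s %s\n", i+1, s.name, s.id)
	}
}
```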
Jun 25 18:49:28.999197 containerd[1445]: time="2024-06-25T18:49:28.999088428Z" level=info msg="StartContainer for \"1bbd2e40724ef5c3995cac20add4f9ca8f35b0ba96224b5ebd19d037b5804158\" returns successfully"
Jun 25 18:49:29.400677 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jun 25 18:49:29.922897 kubelet[2519]: E0625 18:49:29.922857 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:29.933590 kubelet[2519]: I0625 18:49:29.933558 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gz9c9" podStartSLOduration=5.933520627 podStartE2EDuration="5.933520627s" podCreationTimestamp="2024-06-25 18:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:49:29.933004995 +0000 UTC m=+86.567962302" watchObservedRunningTime="2024-06-25 18:49:29.933520627 +0000 UTC m=+86.568477934"
Jun 25 18:49:31.061537 kubelet[2519]: E0625 18:49:31.061460 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:31.662206 kubelet[2519]: E0625 18:49:31.662172 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:32.347925 systemd-networkd[1389]: lxc_health: Link UP
Jun 25 18:49:32.357424 systemd-networkd[1389]: lxc_health: Gained carrier
Jun 25 18:49:33.060951 kubelet[2519]: E0625 18:49:33.060912 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:33.446848 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Jun 25 18:49:33.929536 kubelet[2519]: E0625 18:49:33.929515 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:34.931454 kubelet[2519]: E0625 18:49:34.931277 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:49:37.482084 sshd[4345]: pam_unix(sshd:session): session closed for user core
Jun 25 18:49:37.485698 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit.
Jun 25 18:49:37.488057 systemd[1]: sshd@26-10.0.0.148:22-10.0.0.1:43892.service: Deactivated successfully.
Jun 25 18:49:37.491298 systemd[1]: session-27.scope: Deactivated successfully.
Jun 25 18:49:37.493010 systemd-logind[1429]: Removed session 27.
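The pod_startup_latency_tracker entry above reports podStartSLOduration=5.933520627 for cilium-gz9c9, which is simply the gap between podCreationTimestamp (18:49:24 UTC) and watchObservedRunningTime (18:49:29.933520627 UTC); the zero-valued pulling timestamps indicate no image-pull time was subtracted. The arithmetic can be checked with a few lines of Go (timestamps copied from the log; error handling omitted for brevity):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed by the kubelet; inputs are known
	// to be well-formed here, so parse errors are ignored in this sketch.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2024-06-25 18:49:24 +0000 UTC")
	running, _ := time.Parse(layout, "2024-06-25 18:49:29.933520627 +0000 UTC")

	// Prints 5.933520627s, matching the reported podStartSLOduration.
	fmt.Println(running.Sub(created))
}
```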