Jul 2 00:11:54.891251 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:11:54.891291 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:11:54.891307 kernel: BIOS-provided physical RAM map: Jul 2 00:11:54.891316 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 00:11:54.891324 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 00:11:54.891332 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 00:11:54.891343 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 00:11:54.891352 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 00:11:54.891360 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 00:11:54.891369 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 00:11:54.891380 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 00:11:54.891406 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 2 00:11:54.891414 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 2 00:11:54.891428 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 2 00:11:54.891450 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 00:11:54.891463 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 00:11:54.891472 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 00:11:54.891480 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 00:11:54.891489 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 00:11:54.891497 kernel: NX (Execute Disable) protection: active Jul 2 00:11:54.891505 kernel: APIC: Static calls initialized Jul 2 00:11:54.891514 kernel: efi: EFI v2.7 by EDK II Jul 2 00:11:54.891523 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4f9018 Jul 2 00:11:54.891532 kernel: SMBIOS 2.8 present. Jul 2 00:11:54.891541 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 00:11:54.891551 kernel: Hypervisor detected: KVM Jul 2 00:11:54.891560 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:11:54.891573 kernel: kvm-clock: using sched offset of 4511376549 cycles Jul 2 00:11:54.891583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:11:54.891593 kernel: tsc: Detected 2794.746 MHz processor Jul 2 00:11:54.891609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:11:54.891619 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:11:54.891634 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 00:11:54.891644 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 2 00:11:54.891654 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:11:54.891670 kernel: Using GB pages for direct mapping Jul 2 00:11:54.891687 kernel: Secure boot disabled Jul 2 00:11:54.891697 kernel: ACPI: Early table checksum verification disabled Jul 2 00:11:54.891707 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 00:11:54.891717 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 00:11:54.891732 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:11:54.891743 kernel: ACPI: DSDT 
0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:11:54.891755 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 00:11:54.891766 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:11:54.891776 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:11:54.891787 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:11:54.891813 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 00:11:54.891824 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 00:11:54.891834 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 00:11:54.891844 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 00:11:54.891858 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 00:11:54.891868 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 00:11:54.891878 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jul 2 00:11:54.891895 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 00:11:54.891906 kernel: No NUMA configuration found Jul 2 00:11:54.891916 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 00:11:54.891927 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 00:11:54.891937 kernel: Zone ranges: Jul 2 00:11:54.891947 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:11:54.891961 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 00:11:54.891980 kernel: Normal empty Jul 2 00:11:54.891995 kernel: Movable zone start for each node Jul 2 00:11:54.892007 kernel: Early memory node ranges Jul 2 00:11:54.892017 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 00:11:54.892027 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 00:11:54.892037 
kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 00:11:54.892048 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 00:11:54.892058 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 00:11:54.892068 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 00:11:54.892082 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 00:11:54.892092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:11:54.892102 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 00:11:54.892112 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 00:11:54.892122 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:11:54.892133 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 00:11:54.892143 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 00:11:54.892154 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 00:11:54.892164 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 00:11:54.892178 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:11:54.892188 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 00:11:54.892198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 00:11:54.892209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:11:54.892219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:11:54.892229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:11:54.892239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:11:54.892249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:11:54.892260 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 00:11:54.892273 kernel: TSC deadline timer available Jul 2 00:11:54.892283 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 00:11:54.892293 kernel: kvm-guest: 
APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 2 00:11:54.892304 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 00:11:54.892314 kernel: kvm-guest: setup PV sched yield Jul 2 00:11:54.892324 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 00:11:54.892334 kernel: Booting paravirtualized kernel on KVM Jul 2 00:11:54.892350 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:11:54.892363 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 2 00:11:54.892377 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Jul 2 00:11:54.892404 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Jul 2 00:11:54.892415 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 00:11:54.892425 kernel: kvm-guest: PV spinlocks enabled Jul 2 00:11:54.892441 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 00:11:54.892461 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:11:54.892472 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:11:54.892482 kernel: random: crng init done Jul 2 00:11:54.892492 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:11:54.892512 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:11:54.892523 kernel: Fallback order for Node 0: 0 Jul 2 00:11:54.892533 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Jul 2 00:11:54.892543 kernel: Policy zone: DMA32 Jul 2 00:11:54.892553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:11:54.892564 kernel: Memory: 2388204K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 178536K reserved, 0K cma-reserved) Jul 2 00:11:54.892578 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 00:11:54.892594 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:11:54.892609 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:11:54.892619 kernel: Dynamic Preempt: voluntary Jul 2 00:11:54.892629 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:11:54.892640 kernel: rcu: RCU event tracing is enabled. Jul 2 00:11:54.892651 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 00:11:54.892677 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:11:54.892688 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:11:54.892699 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:11:54.892710 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:11:54.892721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 00:11:54.892731 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 00:11:54.892742 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 00:11:54.892756 kernel: Console: colour dummy device 80x25 Jul 2 00:11:54.892766 kernel: printk: console [ttyS0] enabled Jul 2 00:11:54.892776 kernel: ACPI: Core revision 20230628 Jul 2 00:11:54.892788 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 00:11:54.892799 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:11:54.892812 kernel: x2apic enabled Jul 2 00:11:54.892822 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:11:54.892833 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 2 00:11:54.892844 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 2 00:11:54.892855 kernel: kvm-guest: setup PV IPIs Jul 2 00:11:54.892866 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 00:11:54.892877 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 00:11:54.892888 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Jul 2 00:11:54.892898 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 00:11:54.892912 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 00:11:54.892922 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 00:11:54.892933 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:11:54.892943 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:11:54.892954 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:11:54.892965 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:11:54.892976 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 00:11:54.892986 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 00:11:54.892997 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 00:11:54.893011 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 00:11:54.893021 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 2 00:11:54.893033 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 2 00:11:54.893044 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 2 00:11:54.893055 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:11:54.893065 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:11:54.893076 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:11:54.893086 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:11:54.893097 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 2 00:11:54.893111 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:11:54.893121 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:11:54.893132 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:11:54.893142 kernel: SELinux: Initializing. Jul 2 00:11:54.893153 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:11:54.893164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:11:54.893175 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 00:11:54.893185 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:11:54.893199 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:11:54.893210 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:11:54.893221 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 00:11:54.893231 kernel: ... version: 0 Jul 2 00:11:54.893242 kernel: ... bit width: 48 Jul 2 00:11:54.893253 kernel: ... generic registers: 6 Jul 2 00:11:54.893263 kernel: ... value mask: 0000ffffffffffff Jul 2 00:11:54.893274 kernel: ... max period: 00007fffffffffff Jul 2 00:11:54.893284 kernel: ... fixed-purpose events: 0 Jul 2 00:11:54.893295 kernel: ... event mask: 000000000000003f Jul 2 00:11:54.893309 kernel: signal: max sigframe size: 1776 Jul 2 00:11:54.893319 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:11:54.893330 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:11:54.893341 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:11:54.893352 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:11:54.893362 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 2 00:11:54.893373 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 00:11:54.893383 kernel: smpboot: Max logical packages: 1 Jul 2 00:11:54.893423 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jul 2 00:11:54.893438 kernel: devtmpfs: initialized Jul 2 00:11:54.893459 kernel: x86/mm: Memory block size: 128MB Jul 2 00:11:54.893470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 00:11:54.893481 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 00:11:54.893492 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 00:11:54.893503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 00:11:54.893514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 00:11:54.893526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:11:54.893537 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 00:11:54.893551 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:11:54.893562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:11:54.893573 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:11:54.893583 kernel: audit: type=2000 audit(1719879114.403:1): state=initialized audit_enabled=0 res=1 Jul 2 00:11:54.893594 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:11:54.893605 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:11:54.893615 kernel: cpuidle: using governor menu Jul 2 00:11:54.893626 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:11:54.893637 kernel: dca service started, version 1.12.1 Jul 2 00:11:54.893651 kernel: PCI: Using configuration type 1 for base access Jul 2 00:11:54.893662 kernel: PCI: Using configuration type 1 for 
extended access Jul 2 00:11:54.893672 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 00:11:54.893683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:11:54.893694 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:11:54.893704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:11:54.893715 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:11:54.893726 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:11:54.893740 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:11:54.893750 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:11:54.893761 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:11:54.893772 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:11:54.893783 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:11:54.893794 kernel: ACPI: Interpreter enabled Jul 2 00:11:54.893805 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 00:11:54.893816 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:11:54.893827 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:11:54.893838 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:11:54.893852 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 00:11:54.893863 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:11:54.894082 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:11:54.894100 kernel: acpiphp: Slot [3] registered Jul 2 00:11:54.894111 kernel: acpiphp: Slot [4] registered Jul 2 00:11:54.894122 kernel: acpiphp: Slot [5] registered Jul 2 00:11:54.894133 kernel: acpiphp: Slot [6] registered Jul 2 00:11:54.894144 kernel: acpiphp: Slot [7] registered Jul 2 00:11:54.894159 kernel: acpiphp: Slot [8] registered Jul 2 00:11:54.894169 kernel: acpiphp: Slot [9] 
registered Jul 2 00:11:54.894180 kernel: acpiphp: Slot [10] registered Jul 2 00:11:54.894191 kernel: acpiphp: Slot [11] registered Jul 2 00:11:54.894202 kernel: acpiphp: Slot [12] registered Jul 2 00:11:54.894213 kernel: acpiphp: Slot [13] registered Jul 2 00:11:54.894224 kernel: acpiphp: Slot [14] registered Jul 2 00:11:54.894234 kernel: acpiphp: Slot [15] registered Jul 2 00:11:54.894245 kernel: acpiphp: Slot [16] registered Jul 2 00:11:54.894259 kernel: acpiphp: Slot [17] registered Jul 2 00:11:54.894270 kernel: acpiphp: Slot [18] registered Jul 2 00:11:54.894281 kernel: acpiphp: Slot [19] registered Jul 2 00:11:54.894292 kernel: acpiphp: Slot [20] registered Jul 2 00:11:54.894302 kernel: acpiphp: Slot [21] registered Jul 2 00:11:54.894313 kernel: acpiphp: Slot [22] registered Jul 2 00:11:54.894324 kernel: acpiphp: Slot [23] registered Jul 2 00:11:54.894334 kernel: acpiphp: Slot [24] registered Jul 2 00:11:54.894345 kernel: acpiphp: Slot [25] registered Jul 2 00:11:54.894355 kernel: acpiphp: Slot [26] registered Jul 2 00:11:54.894368 kernel: acpiphp: Slot [27] registered Jul 2 00:11:54.894379 kernel: acpiphp: Slot [28] registered Jul 2 00:11:54.894404 kernel: acpiphp: Slot [29] registered Jul 2 00:11:54.894415 kernel: acpiphp: Slot [30] registered Jul 2 00:11:54.894425 kernel: acpiphp: Slot [31] registered Jul 2 00:11:54.894436 kernel: PCI host bridge to bus 0000:00 Jul 2 00:11:54.894613 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:11:54.894756 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:11:54.894905 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:11:54.895054 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 00:11:54.895204 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 00:11:54.895347 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:11:54.895560 kernel: pci 
0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:11:54.895732 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:11:54.895906 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 00:11:54.896059 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 00:11:54.896206 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 00:11:54.896353 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 00:11:54.896527 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 00:11:54.896653 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 00:11:54.896807 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 00:11:54.896965 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 00:11:54.897121 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 00:11:54.897290 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 00:11:54.897493 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 00:11:54.897650 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 00:11:54.897805 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 00:11:54.897961 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 00:11:54.898128 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:11:54.898304 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 00:11:54.898499 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 00:11:54.898659 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 00:11:54.898814 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 00:11:54.898981 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 00:11:54.899144 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 00:11:54.899300 
kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 00:11:54.899496 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 00:11:54.899665 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 00:11:54.899825 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 00:11:54.899984 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 00:11:54.900142 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 00:11:54.900299 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 00:11:54.900320 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:11:54.900332 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:11:54.900343 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:11:54.900354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:11:54.900364 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:11:54.900376 kernel: iommu: Default domain type: Translated Jul 2 00:11:54.900401 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:11:54.900413 kernel: efivars: Registered efivars operations Jul 2 00:11:54.900423 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:11:54.900438 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:11:54.900458 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 00:11:54.900469 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 00:11:54.900479 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 00:11:54.900490 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 00:11:54.900649 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 00:11:54.900806 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 00:11:54.900961 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:11:54.900982 
kernel: vgaarb: loaded Jul 2 00:11:54.900993 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 00:11:54.901005 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 00:11:54.901017 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:11:54.901029 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:11:54.901043 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:11:54.901053 kernel: pnp: PnP ACPI init Jul 2 00:11:54.901225 kernel: pnp 00:02: [dma 2] Jul 2 00:11:54.901246 kernel: pnp: PnP ACPI: found 6 devices Jul 2 00:11:54.901257 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:11:54.901268 kernel: NET: Registered PF_INET protocol family Jul 2 00:11:54.901279 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:11:54.901290 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 00:11:54.901301 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:11:54.901312 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:11:54.901323 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 00:11:54.901334 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 00:11:54.901349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:11:54.901360 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:11:54.901370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:11:54.901381 kernel: NET: Registered PF_XDP protocol family Jul 2 00:11:54.901652 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 00:11:54.901811 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 00:11:54.901961 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] 
Jul 2 00:11:54.902105 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:11:54.902262 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:11:54.902423 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 00:11:54.902592 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 00:11:54.902751 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 00:11:54.902906 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 00:11:54.902923 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:11:54.902934 kernel: Initialise system trusted keyrings Jul 2 00:11:54.902945 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 00:11:54.902961 kernel: Key type asymmetric registered Jul 2 00:11:54.902972 kernel: Asymmetric key parser 'x509' registered Jul 2 00:11:54.902983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:11:54.902994 kernel: io scheduler mq-deadline registered Jul 2 00:11:54.903005 kernel: io scheduler kyber registered Jul 2 00:11:54.903015 kernel: io scheduler bfq registered Jul 2 00:11:54.903026 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:11:54.903038 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 00:11:54.903049 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 00:11:54.903063 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 00:11:54.903074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:11:54.903085 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:11:54.903097 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:11:54.903128 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:11:54.903142 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:11:54.903300 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 00:11:54.903318 kernel: input: AT Translated Set 2 keyboard 
as /devices/platform/i8042/serio0/input/input0 Jul 2 00:11:54.903529 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 00:11:54.903728 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:11:54 UTC (1719879114) Jul 2 00:11:54.903876 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 00:11:54.903893 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 2 00:11:54.903904 kernel: efifb: probing for efifb Jul 2 00:11:54.903916 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 2 00:11:54.903927 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 2 00:11:54.903939 kernel: efifb: scrolling: redraw Jul 2 00:11:54.903950 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 2 00:11:54.903966 kernel: Console: switching to colour frame buffer device 100x37 Jul 2 00:11:54.903978 kernel: fb0: EFI VGA frame buffer device Jul 2 00:11:54.903989 kernel: pstore: Using crash dump compression: deflate Jul 2 00:11:54.904001 kernel: pstore: Registered efi_pstore as persistent store backend Jul 2 00:11:54.904015 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:11:54.904027 kernel: Segment Routing with IPv6 Jul 2 00:11:54.904038 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:11:54.904049 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:11:54.904061 kernel: Key type dns_resolver registered Jul 2 00:11:54.904075 kernel: IPI shorthand broadcast: enabled Jul 2 00:11:54.904087 kernel: sched_clock: Marking stable (847002327, 111332474)->(997239753, -38904952) Jul 2 00:11:54.904101 kernel: registered taskstats version 1 Jul 2 00:11:54.904113 kernel: Loading compiled-in X.509 certificates Jul 2 00:11:54.904124 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:11:54.904135 kernel: Key type .fscrypt registered Jul 2 00:11:54.904150 kernel: Key type fscrypt-provisioning registered Jul 2 
00:11:54.904161 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:11:54.904172 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:11:54.904184 kernel: ima: No architecture policies found Jul 2 00:11:54.904195 kernel: clk: Disabling unused clocks Jul 2 00:11:54.904207 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:11:54.904218 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:11:54.904230 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:11:54.904241 kernel: Run /init as init process Jul 2 00:11:54.904256 kernel: with arguments: Jul 2 00:11:54.904268 kernel: /init Jul 2 00:11:54.904279 kernel: with environment: Jul 2 00:11:54.904290 kernel: HOME=/ Jul 2 00:11:54.904301 kernel: TERM=linux Jul 2 00:11:54.904312 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:11:54.904326 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:11:54.904344 systemd[1]: Detected virtualization kvm. Jul 2 00:11:54.904356 systemd[1]: Detected architecture x86-64. Jul 2 00:11:54.904368 systemd[1]: Running in initrd. Jul 2 00:11:54.904380 systemd[1]: No hostname configured, using default hostname. Jul 2 00:11:54.904407 systemd[1]: Hostname set to . Jul 2 00:11:54.904419 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:11:54.904431 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:11:54.904451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:11:54.904468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 2 00:11:54.904481 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:11:54.904494 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:11:54.904506 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:11:54.904519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:11:54.904533 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:11:54.904546 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:11:54.904561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:11:54.904573 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:11:54.904585 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:11:54.904598 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:11:54.904609 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:11:54.904622 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:11:54.904634 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:11:54.904646 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:11:54.904661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:11:54.904673 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:11:54.904685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:11:54.904697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:11:54.904709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:11:54.904721 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:11:54.904733 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:11:54.904746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:11:54.904757 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:11:54.904773 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:11:54.904785 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:11:54.904797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:11:54.904809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:54.904821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:11:54.904833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:11:54.904846 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:11:54.904885 systemd-journald[191]: Collecting audit messages is disabled.
Jul 2 00:11:54.904921 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:11:54.904934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:54.904947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:54.904960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:11:54.904972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:11:54.904984 systemd-journald[191]: Journal started
Jul 2 00:11:54.905009 systemd-journald[191]: Runtime Journal (/run/log/journal/bf22e6b35cb3427c95f9d01d4fca1515) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:11:54.883556 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:11:54.907096 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:11:54.911805 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:11:54.917423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:11:54.919542 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 2 00:11:54.922210 kernel: Bridge firewalling registered
Jul 2 00:11:54.920645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:11:54.921934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:11:54.932894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:11:54.935133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:54.935958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:11:54.939409 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:11:54.949068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:11:54.951811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:11:54.956930 dracut-cmdline[228]: dracut-dracut-053
Jul 2 00:11:54.960204 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:11:54.998891 systemd-resolved[234]: Positive Trust Anchors:
Jul 2 00:11:54.998906 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:11:54.998937 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:11:55.001460 systemd-resolved[234]: Defaulting to hostname 'linux'.
Jul 2 00:11:55.002457 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:11:55.008352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:11:55.049415 kernel: SCSI subsystem initialized
Jul 2 00:11:55.059407 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:11:55.072416 kernel: iscsi: registered transport (tcp)
Jul 2 00:11:55.096415 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:11:55.096456 kernel: QLogic iSCSI HBA Driver
Jul 2 00:11:55.140123 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:11:55.152548 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:11:55.179783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:11:55.179824 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:11:55.180844 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:11:55.224411 kernel: raid6: avx2x4 gen() 29834 MB/s
Jul 2 00:11:55.241408 kernel: raid6: avx2x2 gen() 30417 MB/s
Jul 2 00:11:55.258524 kernel: raid6: avx2x1 gen() 25349 MB/s
Jul 2 00:11:55.258564 kernel: raid6: using algorithm avx2x2 gen() 30417 MB/s
Jul 2 00:11:55.276523 kernel: raid6: .... xor() 19578 MB/s, rmw enabled
Jul 2 00:11:55.276553 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:11:55.302419 kernel: xor: automatically using best checksumming function avx
Jul 2 00:11:55.474418 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:11:55.485919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:11:55.498530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:11:55.510515 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 2 00:11:55.514946 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:11:55.523574 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:11:55.536414 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jul 2 00:11:55.566984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:11:55.583555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:11:55.646893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:11:55.658624 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:11:55.674693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:11:55.677224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:11:55.681296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:11:55.684270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:11:55.689460 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:11:55.711714 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:11:55.719602 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:11:55.719744 kernel: GPT:9289727 != 19775487
Jul 2 00:11:55.719760 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:11:55.719781 kernel: GPT:9289727 != 19775487
Jul 2 00:11:55.719796 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:11:55.719809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:55.719823 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:11:55.719837 kernel: libata version 3.00 loaded.
Jul 2 00:11:55.696616 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:11:55.715414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:11:55.725442 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:11:55.735920 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:11:55.735935 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:11:55.735945 kernel: scsi host0: ata_piix
Jul 2 00:11:55.736118 kernel: scsi host1: ata_piix
Jul 2 00:11:55.736262 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:11:55.736279 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:11:55.740815 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:11:55.742448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:55.746528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:55.750443 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (458)
Jul 2 00:11:55.750396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:11:55.751705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:55.754953 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:55.757505 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Jul 2 00:11:55.772884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:11:55.782712 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:11:55.792710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:11:55.796341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:11:55.816137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:11:55.826732 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:11:55.834683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:11:55.854709 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:11:55.857973 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:11:55.864259 disk-uuid[542]: Primary Header is updated.
Jul 2 00:11:55.864259 disk-uuid[542]: Secondary Entries is updated.
Jul 2 00:11:55.864259 disk-uuid[542]: Secondary Header is updated.
Jul 2 00:11:55.868438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:55.873432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:55.881656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:11:55.886434 kernel: ata2: found unknown device (class 0)
Jul 2 00:11:55.888405 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:11:55.890474 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:11:55.943726 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:11:55.962031 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:11:55.962051 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:11:56.873426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:11:56.874043 disk-uuid[544]: The operation has completed successfully.
Jul 2 00:11:56.902105 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:11:56.902234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:11:56.929634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:11:56.933573 sh[580]: Success
Jul 2 00:11:56.950432 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:11:56.988162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:11:57.000974 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:11:57.006375 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:11:57.017540 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:11:57.017577 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:11:57.017591 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:11:57.018906 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:11:57.019900 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:11:57.024893 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:11:57.026368 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:11:57.039585 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:11:57.042560 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:11:57.054643 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:11:57.054683 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:11:57.054699 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:57.058655 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:57.068366 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:11:57.070463 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:11:57.080280 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:11:57.087597 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:11:57.145984 ignition[677]: Ignition 2.18.0
Jul 2 00:11:57.146005 ignition[677]: Stage: fetch-offline
Jul 2 00:11:57.146074 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:57.146091 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:57.146345 ignition[677]: parsed url from cmdline: ""
Jul 2 00:11:57.146350 ignition[677]: no config URL provided
Jul 2 00:11:57.146983 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:11:57.147000 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:11:57.147034 ignition[677]: op(1): [started] loading QEMU firmware config module
Jul 2 00:11:57.147041 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:11:57.156377 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:11:57.171380 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:11:57.182662 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:11:57.200417 ignition[677]: parsing config with SHA512: a57660612b0613ed15e3a5715d8a7e44e092256366c476553c023d95ab0d31ea4686b341448cfbea1eaabe40d31b8afb0ca522f24d89e14d8fcc92832b330d4c
Jul 2 00:11:57.204213 unknown[677]: fetched base config from "system"
Jul 2 00:11:57.204225 unknown[677]: fetched user config from "qemu"
Jul 2 00:11:57.204711 ignition[677]: fetch-offline: fetch-offline passed
Jul 2 00:11:57.204784 ignition[677]: Ignition finished successfully
Jul 2 00:11:57.208013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:11:57.218000 systemd-networkd[770]: lo: Link UP
Jul 2 00:11:57.218012 systemd-networkd[770]: lo: Gained carrier
Jul 2 00:11:57.221032 systemd-networkd[770]: Enumeration completed
Jul 2 00:11:57.221146 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:11:57.221857 systemd[1]: Reached target network.target - Network.
Jul 2 00:11:57.222111 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:11:57.227714 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:57.227726 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:11:57.228723 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:11:57.232045 systemd-networkd[770]: eth0: Link UP
Jul 2 00:11:57.232050 systemd-networkd[770]: eth0: Gained carrier
Jul 2 00:11:57.232062 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:11:57.247540 ignition[773]: Ignition 2.18.0
Jul 2 00:11:57.247554 ignition[773]: Stage: kargs
Jul 2 00:11:57.247774 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:57.247789 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:57.248882 ignition[773]: kargs: kargs passed
Jul 2 00:11:57.248940 ignition[773]: Ignition finished successfully
Jul 2 00:11:57.254473 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:11:57.256899 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:11:57.267632 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:11:57.282454 ignition[783]: Ignition 2.18.0
Jul 2 00:11:57.282469 ignition[783]: Stage: disks
Jul 2 00:11:57.282673 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:57.282687 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:57.283775 ignition[783]: disks: disks passed
Jul 2 00:11:57.283831 ignition[783]: Ignition finished successfully
Jul 2 00:11:57.289459 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:11:57.290162 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:11:57.291869 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:11:57.294001 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:11:57.296360 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:11:57.299061 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:11:57.308596 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:11:57.323131 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:11:57.329736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:11:57.338495 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:11:57.454409 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:11:57.454666 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:11:57.455737 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:11:57.474474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:11:57.476381 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:11:57.477732 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:11:57.477773 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:11:57.490750 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jul 2 00:11:57.490775 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:11:57.490786 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:11:57.490797 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:57.477794 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:11:57.486347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:11:57.491806 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:11:57.495971 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:57.498725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:11:57.536619 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:11:57.541274 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:11:57.545447 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:11:57.550873 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:11:57.652812 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:11:57.661686 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:11:57.665629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:11:57.672406 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:11:57.694069 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:11:57.696505 ignition[915]: INFO : Ignition 2.18.0
Jul 2 00:11:57.696505 ignition[915]: INFO : Stage: mount
Jul 2 00:11:57.698404 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:57.698404 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:57.698404 ignition[915]: INFO : mount: mount passed
Jul 2 00:11:57.698404 ignition[915]: INFO : Ignition finished successfully
Jul 2 00:11:57.700262 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:11:57.711549 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:11:58.016232 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:11:58.037555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:11:58.046977 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Jul 2 00:11:58.047012 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:11:58.047027 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:11:58.048525 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:11:58.051413 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:11:58.052609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:11:58.079749 ignition[948]: INFO : Ignition 2.18.0
Jul 2 00:11:58.079749 ignition[948]: INFO : Stage: files
Jul 2 00:11:58.081918 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:11:58.081918 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:11:58.081918 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:11:58.081918 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:11:58.081918 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:11:58.089006 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:11:58.090601 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:11:58.092481 unknown[948]: wrote ssh authorized keys file for user: core
Jul 2 00:11:58.093716 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:11:58.096246 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:11:58.098240 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:11:58.160415 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:11:58.292985 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:11:58.292985 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:11:58.297272 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:11:58.741729 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:11:58.861582 systemd-networkd[770]: eth0: Gained IPv6LL
Jul 2 00:11:58.894612 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:11:58.894612 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:11:58.899568 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 00:11:59.211381 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:11:59.720859 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:11:59.720859 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:11:59.725446 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 00:11:59.728039 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:11:59.755132 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:11:59.762296 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:11:59.764143 ignition[948]: INFO : files: files passed
Jul 2 00:11:59.764143 ignition[948]: INFO : Ignition finished successfully
Jul 2 00:11:59.775702 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:11:59.783710 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:11:59.787042 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:11:59.789997 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:11:59.791011 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:11:59.799026 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:11:59.803017 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:59.803017 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:59.807747 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:11:59.805628 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:11:59.808542 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:11:59.823592 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:11:59.851617 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:11:59.851777 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:11:59.854734 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:11:59.857103 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:11:59.858401 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:11:59.873873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:11:59.889931 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:11:59.904802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:11:59.916690 systemd[1]: Stopped target network.target - Network.
Jul 2 00:11:59.917921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:11:59.919928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:11:59.922339 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:11:59.924504 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:11:59.924639 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:11:59.926957 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:11:59.928693 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:11:59.930783 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:11:59.933294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:11:59.935760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:11:59.938371 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:11:59.940677 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:11:59.943398 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:11:59.945839 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:11:59.948578 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:11:59.950773 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:11:59.950909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:11:59.953361 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:11:59.955215 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:11:59.957323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:11:59.957513 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:11:59.959805 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:11:59.959917 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:11:59.962621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:11:59.962729 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:11:59.965041 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:11:59.966884 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:11:59.970449 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:11:59.972790 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:11:59.975155 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:11:59.977009 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:11:59.977104 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:11:59.979099 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:11:59.979190 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:11:59.981763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:11:59.981916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:11:59.984093 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:11:59.984639 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:12:00.002622 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:12:00.004469 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:12:00.005756 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:12:00.008027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:12:00.009928 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:12:00.010216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:12:00.012579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:12:00.012755 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:12:00.022212 ignition[1002]: INFO : Ignition 2.18.0
Jul 2 00:12:00.022212 ignition[1002]: INFO : Stage: umount
Jul 2 00:12:00.022212 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:12:00.022212 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:12:00.022212 ignition[1002]: INFO : umount: umount passed
Jul 2 00:12:00.022212 ignition[1002]: INFO : Ignition finished successfully
Jul 2 00:12:00.014453 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jul 2 00:12:00.020352 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:12:00.020514 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:12:00.023605 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:12:00.023787 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:12:00.026138 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:12:00.026277 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:12:00.028529 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:12:00.028645 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:12:00.034815 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:12:00.034870 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:12:00.036183 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:12:00.036260 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:12:00.038579 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:12:00.038664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:12:00.040820 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:12:00.040883 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:12:00.043152 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:12:00.043210 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:12:00.053577 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:12:00.055026 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:12:00.055096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:12:00.057499 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:12:00.057553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:12:00.060135 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:12:00.060196 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:12:00.061646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:12:00.061717 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:12:00.064189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:12:00.067438 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:12:00.078660 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:12:00.078834 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:12:00.082858 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:12:00.083029 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:12:00.085309 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:12:00.085374 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:12:00.087407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:12:00.087457 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:12:00.089404 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:12:00.089457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:12:00.091711 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:12:00.091766 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:12:00.094426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:12:00.094483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:12:00.109761 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:12:00.111142 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:12:00.111247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:12:00.114044 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 00:12:00.114117 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:12:00.116826 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:12:00.116897 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:12:00.151155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:12:00.151263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:00.154537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:12:00.154699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:12:00.284010 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:12:00.284198 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:12:00.285278 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:12:00.287645 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:12:00.287707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:12:00.299827 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:12:00.310510 systemd[1]: Switching root.
Jul 2 00:12:00.338531 systemd-journald[191]: Journal stopped
Jul 2 00:12:02.044035 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:12:02.044124 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:12:02.044155 kernel: SELinux: policy capability open_perms=1
Jul 2 00:12:02.044172 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:12:02.044188 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:12:02.044209 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:12:02.044224 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:12:02.044244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:12:02.044260 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:12:02.044287 kernel: audit: type=1403 audit(1719879121.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:12:02.044305 systemd[1]: Successfully loaded SELinux policy in 49.820ms.
Jul 2 00:12:02.044331 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.114ms.
Jul 2 00:12:02.044349 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:12:02.044366 systemd[1]: Detected virtualization kvm.
Jul 2 00:12:02.044402 systemd[1]: Detected architecture x86-64.
Jul 2 00:12:02.044419 systemd[1]: Detected first boot.
Jul 2 00:12:02.044435 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:12:02.044451 zram_generator::config[1046]: No configuration found.
Jul 2 00:12:02.044472 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:12:02.044489 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:12:02.044505 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:12:02.044522 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:12:02.044540 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:12:02.044556 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:12:02.044573 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:12:02.044589 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:12:02.044608 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:12:02.044625 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:12:02.044642 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:12:02.044658 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:12:02.044675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:12:02.044694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:12:02.044710 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:12:02.044727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:12:02.044743 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:12:02.044763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:12:02.044779 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:12:02.044795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:12:02.044811 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:12:02.044827 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:12:02.044843 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:12:02.044865 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:12:02.044887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:12:02.044908 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:12:02.044927 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:12:02.044943 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:12:02.044959 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:12:02.044976 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:12:02.044992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:12:02.045008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:12:02.045024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:12:02.045040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:12:02.045063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:12:02.045079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:12:02.045097 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:12:02.045114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:02.045131 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:12:02.045147 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:12:02.045164 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:12:02.045181 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:12:02.045202 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:12:02.045219 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:12:02.045235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:12:02.045252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:12:02.045268 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:12:02.045294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:12:02.045311 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:12:02.045328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:12:02.045344 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:12:02.045364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:12:02.045381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:12:02.045413 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:12:02.045436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:12:02.045457 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:12:02.045476 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:12:02.045492 kernel: loop: module loaded
Jul 2 00:12:02.045508 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:12:02.045524 kernel: fuse: init (API version 7.39)
Jul 2 00:12:02.045543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:12:02.045560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:12:02.045580 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:12:02.045600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:12:02.045616 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:12:02.045632 systemd[1]: Stopped verity-setup.service.
Jul 2 00:12:02.045649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:02.045666 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:12:02.045686 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:12:02.045702 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:12:02.045719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:12:02.045734 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:12:02.045751 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:12:02.045801 systemd-journald[1108]: Collecting audit messages is disabled.
Jul 2 00:12:02.045832 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:12:02.045849 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:12:02.045867 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:12:02.045883 systemd-journald[1108]: Journal started
Jul 2 00:12:02.045912 systemd-journald[1108]: Runtime Journal (/run/log/journal/bf22e6b35cb3427c95f9d01d4fca1515) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:12:01.712709 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:12:01.732683 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:12:02.049439 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:12:01.733169 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:12:02.049469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:12:02.049680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:12:02.054187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:12:02.054412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:12:02.055995 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:12:02.056191 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:12:02.057973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:12:02.058186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:12:02.059572 kernel: ACPI: bus type drm_connector registered
Jul 2 00:12:02.060844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:12:02.062707 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:12:02.062947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:12:02.065374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:12:02.067001 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:12:02.106746 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:12:02.144589 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:12:02.147703 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:12:02.148900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:12:02.148945 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:12:02.151372 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:12:02.154422 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:12:02.156961 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:12:02.158185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:12:02.160076 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:12:02.162587 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:12:02.164043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:12:02.165947 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:12:02.167563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:12:02.171995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:12:02.177733 systemd-journald[1108]: Time spent on flushing to /var/log/journal/bf22e6b35cb3427c95f9d01d4fca1515 is 14.825ms for 989 entries.
Jul 2 00:12:02.177733 systemd-journald[1108]: System Journal (/var/log/journal/bf22e6b35cb3427c95f9d01d4fca1515) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:12:02.376009 systemd-journald[1108]: Received client request to flush runtime journal.
Jul 2 00:12:02.376227 kernel: loop0: detected capacity change from 0 to 80568
Jul 2 00:12:02.376325 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:12:02.376528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:12:02.376558 kernel: loop1: detected capacity change from 0 to 139904
Jul 2 00:12:02.179520 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:12:02.183639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:12:02.187076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:12:02.246858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:12:02.248398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:12:02.249970 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:12:02.251685 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:12:02.264822 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:12:02.306240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:12:02.308887 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:12:02.326060 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Jul 2 00:12:02.326074 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Jul 2 00:12:02.335025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:12:02.345753 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:12:02.352570 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:12:02.354483 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:12:02.357771 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:12:02.379529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:12:02.404695 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:12:02.413663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:12:02.428416 kernel: loop2: detected capacity change from 0 to 210664
Jul 2 00:12:02.441076 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jul 2 00:12:02.441536 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jul 2 00:12:02.455659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:12:02.474426 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:12:02.498416 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:12:02.512419 kernel: loop5: detected capacity change from 0 to 210664
Jul 2 00:12:02.517436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:12:02.518184 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:12:02.522456 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:12:02.523034 (sd-merge)[1185]: Merged extensions into '/usr'.
Jul 2 00:12:02.527683 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:12:02.527697 systemd[1]: Reloading...
Jul 2 00:12:02.599414 zram_generator::config[1211]: No configuration found.
Jul 2 00:12:02.742499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:12:02.787112 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:12:02.793907 systemd[1]: Reloading finished in 265 ms.
Jul 2 00:12:02.829835 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:12:02.831801 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:12:02.848710 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:12:02.851288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:12:02.859662 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:12:02.859783 systemd[1]: Reloading...
Jul 2 00:12:02.885482 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:12:02.885832 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:12:02.886855 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:12:02.887156 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:12:02.887232 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:12:02.890355 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:12:02.890365 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:12:02.904308 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:12:02.904323 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:12:02.926962 zram_generator::config[1277]: No configuration found.
Jul 2 00:12:03.069463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:12:03.126291 systemd[1]: Reloading finished in 265 ms.
Jul 2 00:12:03.148682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:12:03.173958 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:12:03.177532 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:12:03.181740 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:12:03.186734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:12:03.192752 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:12:03.195173 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:12:03.210505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:12:03.215050 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:12:03.217057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:12:03.226056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:03.226312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:12:03.229325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:12:03.233672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:12:03.237693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:12:03.239152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:12:03.242483 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:12:03.243929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:03.245353 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:12:03.251369 augenrules[1340]: No rules
Jul 2 00:12:03.252053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:12:03.252618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:12:03.256328 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jul 2 00:12:03.262729 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:12:03.264925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:12:03.265136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:12:03.267024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:12:03.267267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:12:03.269523 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:12:03.279329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:03.280082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:12:03.288790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:12:03.293772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:12:03.301542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:12:03.305592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:12:03.307053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:12:03.307320 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:12:03.309154 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:12:03.311968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:12:03.314416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:12:03.316773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:12:03.317031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:12:03.320424 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:12:03.320685 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:12:03.322873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:12:03.323103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:12:03.325475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:12:03.325760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:12:03.334981 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:12:03.360417 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1365)
Jul 2 00:12:03.359738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:12:03.361119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:12:03.361245 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:12:03.365808 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:12:03.368539 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:12:03.381578 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1360)
Jul 2 00:12:03.386730 systemd-resolved[1317]: Positive Trust Anchors:
Jul 2 00:12:03.387182 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:12:03.387297 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:12:03.393350 systemd-resolved[1317]: Defaulting to hostname 'linux'.
Jul 2 00:12:03.401271 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:12:03.427472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:12:03.468714 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:12:03.503221 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:12:03.505002 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:12:03.516535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:12:03.519045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 00:12:03.524444 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:12:03.525680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:12:03.527156 systemd-networkd[1385]: lo: Link UP
Jul 2 00:12:03.528075 systemd-networkd[1385]: lo: Gained carrier
Jul 2 00:12:03.531189 systemd-networkd[1385]: Enumeration completed
Jul 2 00:12:03.531893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:12:03.533345 systemd[1]: Reached target network.target - Network.
Jul 2 00:12:03.535062 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:12:03.535522 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:12:03.537991 systemd-networkd[1385]: eth0: Link UP
Jul 2 00:12:03.538004 systemd-networkd[1385]: eth0: Gained carrier
Jul 2 00:12:03.538028 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:12:03.542737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:12:03.547496 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Jul 2 00:12:03.553507 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:12:03.557683 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Jul 2 00:12:04.320107 systemd-resolved[1317]: Clock change detected. Flushing caches.
Jul 2 00:12:04.320347 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:12:04.320557 systemd-timesyncd[1386]: Initial clock synchronization to Tue 2024-07-02 00:12:04.319845 UTC.
Jul 2 00:12:04.322195 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:12:04.332662 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:12:04.334201 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 2 00:12:04.385501 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:12:04.379978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:12:04.386235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:12:04.386524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:04.393760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:12:04.471716 kernel: kvm_amd: TSC scaling supported
Jul 2 00:12:04.471818 kernel: kvm_amd: Nested Virtualization enabled
Jul 2 00:12:04.471838 kernel: kvm_amd: Nested Paging enabled
Jul 2 00:12:04.472903 kernel: kvm_amd: LBR virtualization supported
Jul 2 00:12:04.472953 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 2 00:12:04.473608 kernel: kvm_amd: Virtual GIF supported
Jul 2 00:12:04.501473 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:12:04.517699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:04.532035 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:12:04.543822 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:12:04.555851 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:12:04.594398 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:12:04.597099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:12:04.598560 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:12:04.599871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:12:04.601273 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:12:04.602840 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:12:04.604110 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:12:04.605412 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:12:04.606716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:12:04.606752 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:12:04.607680 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:12:04.609591 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:12:04.612816 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:12:04.623616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:12:04.626774 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:12:04.628909 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:12:04.630347 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:12:04.631418 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:12:04.631960 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:12:04.631995 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:12:04.633610 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:12:04.636278 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:12:04.638734 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:12:04.640649 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:12:04.644790 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:12:04.648058 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:12:04.652164 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:12:04.652756 jq[1420]: false
Jul 2 00:12:04.655338 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:12:04.660741 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:12:04.665650 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:12:04.670642 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:12:04.672240 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:12:04.672773 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:12:04.676627 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:12:04.678874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found loop3
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found loop4
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found loop5
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found sr0
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda1
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda2
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda3
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found usr
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda4
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda6
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda7
Jul 2 00:12:04.681092 extend-filesystems[1421]: Found vda9
Jul 2 00:12:04.681092 extend-filesystems[1421]: Checking size of /dev/vda9
Jul 2 00:12:04.680986 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:12:04.689695 dbus-daemon[1419]: [system] SELinux support is enabled
Jul 2 00:12:04.684636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:12:04.684846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:12:04.708961 update_engine[1433]: I0702 00:12:04.705165 1433 main.cc:92] Flatcar Update Engine starting
Jul 2 00:12:04.687971 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:12:04.709392 jq[1434]: true
Jul 2 00:12:04.688176 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:12:04.691695 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:12:04.712509 jq[1439]: true
Jul 2 00:12:04.716680 extend-filesystems[1421]: Resized partition /dev/vda9
Jul 2 00:12:04.717498 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:12:04.719095 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:12:04.719192 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:12:04.721101 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:12:04.721120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:12:04.725777 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:12:04.726019 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:12:04.730106 extend-filesystems[1451]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:12:04.731533 update_engine[1433]: I0702 00:12:04.730957 1433 update_check_scheduler.cc:74] Next update check in 7m21s
Jul 2 00:12:04.731548 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:12:04.741643 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:12:04.742696 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:12:04.746229 tar[1437]: linux-amd64/helm
Jul 2 00:12:04.749061 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1361)
Jul 2 00:12:04.812485 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:12:04.837033 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:12:04.844882 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:12:04.844882 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:12:04.844882 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:12:04.869415 extend-filesystems[1421]: Resized filesystem in /dev/vda9
Jul 2 00:12:04.846151 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:12:04.846504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:12:04.860159 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:12:04.860203 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:12:04.863794 systemd-logind[1432]: New seat seat0.
Jul 2 00:12:04.865973 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:12:04.881392 bash[1477]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:12:04.884355 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:12:04.886250 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:12:04.887370 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 00:12:04.933155 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:12:04.944972 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:12:04.966199 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:12:04.966630 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:12:04.973140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:12:05.013130 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:12:05.030185 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:12:05.039623 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:12:05.041545 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:12:05.195791 containerd[1440]: time="2024-07-02T00:12:05.195595149Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:12:05.229009 containerd[1440]: time="2024-07-02T00:12:05.228928315Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:12:05.229009 containerd[1440]: time="2024-07-02T00:12:05.228983368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.232940 containerd[1440]: time="2024-07-02T00:12:05.232874348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:12:05.232940 containerd[1440]: time="2024-07-02T00:12:05.232932817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.235329 containerd[1440]: time="2024-07-02T00:12:05.235261566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:12:05.235329 containerd[1440]: time="2024-07-02T00:12:05.235318794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:12:05.235598 containerd[1440]: time="2024-07-02T00:12:05.235564725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.235701 containerd[1440]: time="2024-07-02T00:12:05.235667137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:12:05.235701 containerd[1440]: time="2024-07-02T00:12:05.235691433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.235845 containerd[1440]: time="2024-07-02T00:12:05.235820104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.236212 containerd[1440]: time="2024-07-02T00:12:05.236176403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.236248 containerd[1440]: time="2024-07-02T00:12:05.236209455Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:12:05.236248 containerd[1440]: time="2024-07-02T00:12:05.236223731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:12:05.236478 containerd[1440]: time="2024-07-02T00:12:05.236429247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:12:05.236478 containerd[1440]: time="2024-07-02T00:12:05.236474021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:12:05.236597 containerd[1440]: time="2024-07-02T00:12:05.236563920Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:12:05.236597 containerd[1440]: time="2024-07-02T00:12:05.236586762Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:12:05.248747 containerd[1440]: time="2024-07-02T00:12:05.248659392Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:12:05.248747 containerd[1440]: time="2024-07-02T00:12:05.248721067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:12:05.248747 containerd[1440]: time="2024-07-02T00:12:05.248734763Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:12:05.262056 containerd[1440]: time="2024-07-02T00:12:05.261967488Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:12:05.262056 containerd[1440]: time="2024-07-02T00:12:05.262056055Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:12:05.262056 containerd[1440]: time="2024-07-02T00:12:05.262071794Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:12:05.262410 containerd[1440]: time="2024-07-02T00:12:05.262087914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:12:05.262504 containerd[1440]: time="2024-07-02T00:12:05.262435607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:12:05.262504 containerd[1440]: time="2024-07-02T00:12:05.262473858Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:12:05.262504 containerd[1440]: time="2024-07-02T00:12:05.262486733Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:12:05.262504 containerd[1440]: time="2024-07-02T00:12:05.262502021Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262519805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262537137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262550492Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262562525Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262576120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262588804Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262600406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.262608 containerd[1440]: time="2024-07-02T00:12:05.262612448Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:12:05.262815 containerd[1440]: time="2024-07-02T00:12:05.262749806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:12:05.263034 containerd[1440]: time="2024-07-02T00:12:05.263011637Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:12:05.263082 containerd[1440]: time="2024-07-02T00:12:05.263042445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263082 containerd[1440]: time="2024-07-02T00:12:05.263067252Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:12:05.263139 containerd[1440]: time="2024-07-02T00:12:05.263088872Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:12:05.263161 containerd[1440]: time="2024-07-02T00:12:05.263150157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263191 containerd[1440]: time="2024-07-02T00:12:05.263164174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263191 containerd[1440]: time="2024-07-02T00:12:05.263178561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263191 containerd[1440]: time="2024-07-02T00:12:05.263190232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263269 containerd[1440]: time="2024-07-02T00:12:05.263204339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263269 containerd[1440]: time="2024-07-02T00:12:05.263219848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263269 containerd[1440]: time="2024-07-02T00:12:05.263233574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263269 containerd[1440]: time="2024-07-02T00:12:05.263247059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263269 containerd[1440]: time="2024-07-02T00:12:05.263261646Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:12:05.263760 containerd[1440]: time="2024-07-02T00:12:05.263690501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263760 containerd[1440]: time="2024-07-02T00:12:05.263756355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263760 containerd[1440]: time="2024-07-02T00:12:05.263774048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263922 containerd[1440]: time="2024-07-02T00:12:05.263791911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263922 containerd[1440]: time="2024-07-02T00:12:05.263807841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263922 containerd[1440]: time="2024-07-02T00:12:05.263828039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263922 containerd[1440]: time="2024-07-02T00:12:05.263842536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.263922 containerd[1440]: time="2024-07-02T00:12:05.263854248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:12:05.264317 containerd[1440]: time="2024-07-02T00:12:05.264221607Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:12:05.264557 containerd[1440]: time="2024-07-02T00:12:05.264323679Z" level=info msg="Connect containerd service"
Jul 2 00:12:05.264557 containerd[1440]: time="2024-07-02T00:12:05.264388190Z" level=info msg="using legacy CRI server"
Jul 2 00:12:05.264557 containerd[1440]: time="2024-07-02T00:12:05.264398810Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:12:05.264557 containerd[1440]: time="2024-07-02T00:12:05.264517783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:12:05.265239 containerd[1440]: time="2024-07-02T00:12:05.265206184Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:12:05.265300 containerd[1440]: time="2024-07-02T00:12:05.265272068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:12:05.265330 containerd[1440]: time="2024-07-02T00:12:05.265301644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:12:05.265330 containerd[1440]: time="2024-07-02T00:12:05.265313686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:12:05.265369 containerd[1440]: time="2024-07-02T00:12:05.265328734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:12:05.265397 containerd[1440]: time="2024-07-02T00:12:05.265331560Z" level=info msg="Start subscribing containerd event"
Jul 2 00:12:05.265536 containerd[1440]: time="2024-07-02T00:12:05.265418343Z" level=info msg="Start recovering state"
Jul 2 00:12:05.265566 containerd[1440]: time="2024-07-02T00:12:05.265549388Z" level=info msg="Start event monitor"
Jul 2 00:12:05.265713 containerd[1440]: time="2024-07-02T00:12:05.265578553Z" level=info msg="Start snapshots syncer"
Jul 2 00:12:05.265713 containerd[1440]: time="2024-07-02T00:12:05.265599663Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:12:05.265713 containerd[1440]: time="2024-07-02T00:12:05.265611174Z" level=info msg="Start streaming server"
Jul 2 00:12:05.265788 containerd[1440]: time="2024-07-02T00:12:05.265709659Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:12:05.265788 containerd[1440]: time="2024-07-02T00:12:05.265778188Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jul 2 00:12:05.265870 containerd[1440]: time="2024-07-02T00:12:05.265842889Z" level=info msg="containerd successfully booted in 0.079985s" Jul 2 00:12:05.265964 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:12:05.386139 tar[1437]: linux-amd64/LICENSE Jul 2 00:12:05.386314 tar[1437]: linux-amd64/README.md Jul 2 00:12:05.408683 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:12:06.150712 systemd-networkd[1385]: eth0: Gained IPv6LL Jul 2 00:12:06.154609 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:12:06.156506 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:12:06.171831 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:12:06.174791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:12:06.177347 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:12:06.200630 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:12:06.200913 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:12:06.202968 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:12:06.205337 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:12:07.377353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:12:07.398060 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:12:07.401845 systemd[1]: Startup finished in 977ms (kernel) + 6.420s (initrd) + 5.558s (userspace) = 12.956s. 
Jul 2 00:12:07.443815 (kubelet)[1532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:12:07.972622 kubelet[1532]: E0702 00:12:07.972551 1532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:12:07.977001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:12:07.977199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:12:07.977657 systemd[1]: kubelet.service: Consumed 1.650s CPU time. Jul 2 00:12:09.741694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:12:09.743227 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:47882.service - OpenSSH per-connection server daemon (10.0.0.1:47882). Jul 2 00:12:09.792068 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 47882 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:09.794615 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:09.804556 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:12:09.819993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:12:09.822365 systemd-logind[1432]: New session 1 of user core. Jul 2 00:12:09.834580 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:12:09.841930 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:12:09.846560 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:09.968641 systemd[1550]: Queued start job for default target default.target. 
Jul 2 00:12:09.977737 systemd[1550]: Created slice app.slice - User Application Slice. Jul 2 00:12:09.977764 systemd[1550]: Reached target paths.target - Paths. Jul 2 00:12:09.977777 systemd[1550]: Reached target timers.target - Timers. Jul 2 00:12:09.979578 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:12:09.993577 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:12:09.993746 systemd[1550]: Reached target sockets.target - Sockets. Jul 2 00:12:09.993767 systemd[1550]: Reached target basic.target - Basic System. Jul 2 00:12:09.993825 systemd[1550]: Reached target default.target - Main User Target. Jul 2 00:12:09.993863 systemd[1550]: Startup finished in 139ms. Jul 2 00:12:09.994052 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:12:09.995796 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:12:10.058029 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:47884.service - OpenSSH per-connection server daemon (10.0.0.1:47884). Jul 2 00:12:10.097652 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.099265 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.103222 systemd-logind[1432]: New session 2 of user core. Jul 2 00:12:10.112574 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:12:10.166190 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:10.175299 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:47884.service: Deactivated successfully. Jul 2 00:12:10.176860 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:12:10.178318 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:12:10.189729 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:47896.service - OpenSSH per-connection server daemon (10.0.0.1:47896). 
Jul 2 00:12:10.190696 systemd-logind[1432]: Removed session 2. Jul 2 00:12:10.220920 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 47896 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.222461 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.227663 systemd-logind[1432]: New session 3 of user core. Jul 2 00:12:10.237623 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:12:10.288944 sshd[1568]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:10.305998 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:47896.service: Deactivated successfully. Jul 2 00:12:10.307965 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:12:10.309776 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:12:10.316784 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:47898.service - OpenSSH per-connection server daemon (10.0.0.1:47898). Jul 2 00:12:10.317926 systemd-logind[1432]: Removed session 3. Jul 2 00:12:10.347981 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 47898 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.349782 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.354057 systemd-logind[1432]: New session 4 of user core. Jul 2 00:12:10.363599 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:12:10.419877 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:10.432146 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:47898.service: Deactivated successfully. Jul 2 00:12:10.434764 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:12:10.436180 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:12:10.444769 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:47906.service - OpenSSH per-connection server daemon (10.0.0.1:47906). 
Jul 2 00:12:10.445848 systemd-logind[1432]: Removed session 4. Jul 2 00:12:10.479404 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.480988 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.485484 systemd-logind[1432]: New session 5 of user core. Jul 2 00:12:10.499704 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:12:10.631688 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:12:10.631992 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:12:10.653375 sudo[1585]: pam_unix(sudo:session): session closed for user root Jul 2 00:12:10.655698 sshd[1582]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:10.672229 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:47906.service: Deactivated successfully. Jul 2 00:12:10.673910 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:12:10.675691 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:12:10.687716 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:47908.service - OpenSSH per-connection server daemon (10.0.0.1:47908). Jul 2 00:12:10.688833 systemd-logind[1432]: Removed session 5. Jul 2 00:12:10.720807 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 47908 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.722383 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.726701 systemd-logind[1432]: New session 6 of user core. Jul 2 00:12:10.742579 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 2 00:12:10.796456 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:12:10.796826 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:12:10.800854 sudo[1595]: pam_unix(sudo:session): session closed for user root Jul 2 00:12:10.808695 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:12:10.809008 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:12:10.827682 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:12:10.829395 auditctl[1598]: No rules Jul 2 00:12:10.830641 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:12:10.830900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:12:10.832605 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:12:10.863778 augenrules[1616]: No rules Jul 2 00:12:10.865650 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:12:10.867179 sudo[1594]: pam_unix(sudo:session): session closed for user root Jul 2 00:12:10.869369 sshd[1590]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:10.883126 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:47908.service: Deactivated successfully. Jul 2 00:12:10.885306 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:12:10.887157 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:12:10.897834 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:47920.service - OpenSSH per-connection server daemon (10.0.0.1:47920). Jul 2 00:12:10.898836 systemd-logind[1432]: Removed session 6. 
Jul 2 00:12:10.929999 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 47920 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:12:10.931940 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:12:10.936813 systemd-logind[1432]: New session 7 of user core. Jul 2 00:12:10.946620 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:12:11.001826 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:12:11.002159 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:12:11.142763 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:12:11.143179 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:12:11.492327 dockerd[1637]: time="2024-07-02T00:12:11.492169131Z" level=info msg="Starting up" Jul 2 00:12:12.112027 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport262790079-merged.mount: Deactivated successfully. Jul 2 00:12:12.189031 dockerd[1637]: time="2024-07-02T00:12:12.188957667Z" level=info msg="Loading containers: start." Jul 2 00:12:12.311473 kernel: Initializing XFRM netlink socket Jul 2 00:12:12.412084 systemd-networkd[1385]: docker0: Link UP Jul 2 00:12:12.441206 dockerd[1637]: time="2024-07-02T00:12:12.441139538Z" level=info msg="Loading containers: done." 
Jul 2 00:12:12.536300 dockerd[1637]: time="2024-07-02T00:12:12.536185881Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:12:12.536796 dockerd[1637]: time="2024-07-02T00:12:12.536506463Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:12:12.536796 dockerd[1637]: time="2024-07-02T00:12:12.536648409Z" level=info msg="Daemon has completed initialization" Jul 2 00:12:12.573457 dockerd[1637]: time="2024-07-02T00:12:12.573348311Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:12:12.573655 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:12:13.109221 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2671493517-merged.mount: Deactivated successfully. Jul 2 00:12:13.462303 containerd[1440]: time="2024-07-02T00:12:13.462176977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 00:12:15.478735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429744537.mount: Deactivated successfully. 
Jul 2 00:12:17.539221 containerd[1440]: time="2024-07-02T00:12:17.539164867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:17.540135 containerd[1440]: time="2024-07-02T00:12:17.540088540Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 00:12:17.541410 containerd[1440]: time="2024-07-02T00:12:17.541382307Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:17.544856 containerd[1440]: time="2024-07-02T00:12:17.544815749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:17.545874 containerd[1440]: time="2024-07-02T00:12:17.545843767Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 4.083622929s" Jul 2 00:12:17.545931 containerd[1440]: time="2024-07-02T00:12:17.545878683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 00:12:17.567290 containerd[1440]: time="2024-07-02T00:12:17.567238225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 00:12:18.228132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 2 00:12:18.245173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:12:18.456542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:12:18.463179 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:12:18.899475 kubelet[1847]: E0702 00:12:18.899344 1847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:12:18.907732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:12:18.907957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:12:20.578950 containerd[1440]: time="2024-07-02T00:12:20.578863188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:20.586135 containerd[1440]: time="2024-07-02T00:12:20.586067003Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 00:12:20.596485 containerd[1440]: time="2024-07-02T00:12:20.596419876Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:20.615108 containerd[1440]: time="2024-07-02T00:12:20.615041141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:20.616474 containerd[1440]: time="2024-07-02T00:12:20.616424776Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 3.049146086s" Jul 2 00:12:20.616521 containerd[1440]: time="2024-07-02T00:12:20.616480741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 00:12:20.648024 containerd[1440]: time="2024-07-02T00:12:20.647981549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 00:12:22.834053 containerd[1440]: time="2024-07-02T00:12:22.833980900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:22.851882 containerd[1440]: time="2024-07-02T00:12:22.851785503Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 00:12:22.899740 containerd[1440]: time="2024-07-02T00:12:22.899652655Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:22.927753 containerd[1440]: time="2024-07-02T00:12:22.927684164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:22.928973 containerd[1440]: time="2024-07-02T00:12:22.928903211Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 2.280878601s" Jul 2 00:12:22.928973 containerd[1440]: time="2024-07-02T00:12:22.928956591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 00:12:22.952575 containerd[1440]: time="2024-07-02T00:12:22.952507204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 00:12:26.601115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507518988.mount: Deactivated successfully. Jul 2 00:12:28.026472 containerd[1440]: time="2024-07-02T00:12:28.026377346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:28.036463 containerd[1440]: time="2024-07-02T00:12:28.036357881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 00:12:28.046773 containerd[1440]: time="2024-07-02T00:12:28.046688812Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:28.061467 containerd[1440]: time="2024-07-02T00:12:28.061388279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:28.062172 containerd[1440]: time="2024-07-02T00:12:28.062131674Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 5.109583593s" Jul 2 00:12:28.062229 containerd[1440]: time="2024-07-02T00:12:28.062171409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 00:12:28.089733 containerd[1440]: time="2024-07-02T00:12:28.089680367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:12:29.092822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:12:29.107639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:12:29.267404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:12:29.272560 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:12:29.319627 kubelet[1899]: E0702 00:12:29.319559 1899 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:12:29.323230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:12:29.323531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:12:31.934371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262614366.mount: Deactivated successfully. 
Jul 2 00:12:35.811333 containerd[1440]: time="2024-07-02T00:12:35.811246196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:35.833749 containerd[1440]: time="2024-07-02T00:12:35.833618007Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:12:35.860547 containerd[1440]: time="2024-07-02T00:12:35.860474171Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:35.874360 containerd[1440]: time="2024-07-02T00:12:35.874293637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:12:35.875384 containerd[1440]: time="2024-07-02T00:12:35.875319252Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 7.785586787s" Jul 2 00:12:35.875384 containerd[1440]: time="2024-07-02T00:12:35.875367823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:12:35.918486 containerd[1440]: time="2024-07-02T00:12:35.918379606Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:12:37.370842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214513393.mount: Deactivated successfully. 
Jul 2 00:12:37.455282 containerd[1440]: time="2024-07-02T00:12:37.455187734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:37.469673 containerd[1440]: time="2024-07-02T00:12:37.469611456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:12:37.490105 containerd[1440]: time="2024-07-02T00:12:37.490033241Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:37.512313 containerd[1440]: time="2024-07-02T00:12:37.512238866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:37.513009 containerd[1440]: time="2024-07-02T00:12:37.512946002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.594503467s"
Jul 2 00:12:37.513009 containerd[1440]: time="2024-07-02T00:12:37.512996578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:12:37.536654 containerd[1440]: time="2024-07-02T00:12:37.536600673Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 00:12:39.042741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169298900.mount: Deactivated successfully.
Jul 2 00:12:39.343040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:12:39.369828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:12:39.543809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:12:39.549983 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:12:39.788232 kubelet[1980]: E0702 00:12:39.788075 1980 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:12:39.792424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:12:39.792644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:12:42.094321 containerd[1440]: time="2024-07-02T00:12:42.094236162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:42.094998 containerd[1440]: time="2024-07-02T00:12:42.094959440Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jul 2 00:12:42.096170 containerd[1440]: time="2024-07-02T00:12:42.096140471Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:42.098970 containerd[1440]: time="2024-07-02T00:12:42.098934664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:12:42.099986 containerd[1440]: time="2024-07-02T00:12:42.099957412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.56331536s"
Jul 2 00:12:42.100041 containerd[1440]: time="2024-07-02T00:12:42.099987079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul 2 00:12:44.607496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:12:44.627665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:12:44.659554 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit session-7.scope)...
Jul 2 00:12:44.659579 systemd[1]: Reloading...
Jul 2 00:12:44.730486 zram_generator::config[2147]: No configuration found.
Jul 2 00:12:45.013007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:12:45.116454 systemd[1]: Reloading finished in 456 ms.
Jul 2 00:12:45.172363 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:12:45.172512 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:12:45.172814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:12:45.175480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:12:45.329604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:12:45.334723 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:12:45.374978 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:12:45.374978 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:12:45.374978 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:12:45.375403 kubelet[2196]: I0702 00:12:45.375014 2196 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:12:45.771651 kubelet[2196]: I0702 00:12:45.771594 2196 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 00:12:45.771651 kubelet[2196]: I0702 00:12:45.771639 2196 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:12:45.771882 kubelet[2196]: I0702 00:12:45.771869 2196 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 00:12:45.803135 kubelet[2196]: I0702 00:12:45.803080 2196 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:12:45.803699 kubelet[2196]: E0702 00:12:45.803669 2196 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.845493 kubelet[2196]: I0702 00:12:45.845436 2196 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:12:45.849014 kubelet[2196]: I0702 00:12:45.848946 2196 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:12:45.849273 kubelet[2196]: I0702 00:12:45.849003 2196 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:12:45.849982 kubelet[2196]: I0702 00:12:45.849951 2196 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:12:45.849982 kubelet[2196]: I0702 00:12:45.849977 2196 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:12:45.850183 kubelet[2196]: I0702 00:12:45.850154 2196 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:12:45.858428 kubelet[2196]: I0702 00:12:45.858399 2196 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 00:12:45.858428 kubelet[2196]: I0702 00:12:45.858425 2196 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:12:45.858501 kubelet[2196]: I0702 00:12:45.858486 2196 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:12:45.858732 kubelet[2196]: I0702 00:12:45.858530 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:12:45.859669 kubelet[2196]: W0702 00:12:45.859549 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.859669 kubelet[2196]: E0702 00:12:45.859633 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.860941 kubelet[2196]: W0702 00:12:45.860903 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.860941 kubelet[2196]: E0702 00:12:45.860940 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.864160 kubelet[2196]: I0702 00:12:45.864140 2196 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:12:45.867646 kubelet[2196]: I0702 00:12:45.867609 2196 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:12:45.867701 kubelet[2196]: W0702 00:12:45.867691 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:12:45.868576 kubelet[2196]: I0702 00:12:45.868558 2196 server.go:1264] "Started kubelet"
Jul 2 00:12:45.868775 kubelet[2196]: I0702 00:12:45.868714 2196 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:12:45.868905 kubelet[2196]: I0702 00:12:45.868832 2196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:12:45.869333 kubelet[2196]: I0702 00:12:45.869268 2196 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:12:45.870190 kubelet[2196]: I0702 00:12:45.870165 2196 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 00:12:45.871486 kubelet[2196]: I0702 00:12:45.871465 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:12:45.871935 kubelet[2196]: I0702 00:12:45.871563 2196 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:12:45.871935 kubelet[2196]: I0702 00:12:45.871637 2196 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 00:12:45.871935 kubelet[2196]: I0702 00:12:45.871688 2196 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 00:12:45.871935 kubelet[2196]: W0702 00:12:45.871906 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.871935 kubelet[2196]: E0702 00:12:45.871939 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:45.872112 kubelet[2196]: E0702 00:12:45.872091 2196 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:12:45.872151 kubelet[2196]: E0702 00:12:45.872134 2196 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:12:45.874361 kubelet[2196]: I0702 00:12:45.874336 2196 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:12:45.874429 kubelet[2196]: I0702 00:12:45.874412 2196 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:12:45.875310 kubelet[2196]: I0702 00:12:45.875287 2196 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:12:45.877478 kubelet[2196]: E0702 00:12:45.876962 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3cfd3230335f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:12:45.868520287 +0000 UTC m=+0.529662605,LastTimestamp:2024-07-02 00:12:45.868520287 +0000 UTC m=+0.529662605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 00:12:45.877478 kubelet[2196]: E0702 00:12:45.877285 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms"
Jul 2 00:12:45.887545 kubelet[2196]: I0702 00:12:45.887511 2196 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:12:45.887545 kubelet[2196]: I0702 00:12:45.887528 2196 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:12:45.887545 kubelet[2196]: I0702 00:12:45.887544 2196 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:12:45.973976 kubelet[2196]: I0702 00:12:45.973926 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:12:45.974366 kubelet[2196]: E0702 00:12:45.974326 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Jul 2 00:12:46.078508 kubelet[2196]: E0702 00:12:46.078359 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms"
Jul 2 00:12:46.175991 kubelet[2196]: I0702 00:12:46.175942 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:12:46.176394 kubelet[2196]: E0702 00:12:46.176359 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Jul 2 00:12:46.360983 kubelet[2196]: I0702 00:12:46.360921 2196 policy_none.go:49] "None policy: Start"
Jul 2 00:12:46.361929 kubelet[2196]: I0702 00:12:46.361896 2196 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:12:46.361929 kubelet[2196]: I0702 00:12:46.361930 2196 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:12:46.365161 kubelet[2196]: I0702 00:12:46.365106 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:12:46.366910 kubelet[2196]: I0702 00:12:46.366872 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:12:46.367002 kubelet[2196]: I0702 00:12:46.366915 2196 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:12:46.367002 kubelet[2196]: I0702 00:12:46.366943 2196 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 00:12:46.367063 kubelet[2196]: E0702 00:12:46.366997 2196 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:12:46.367873 kubelet[2196]: W0702 00:12:46.367756 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:46.367873 kubelet[2196]: E0702 00:12:46.367819 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:46.371668 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:12:46.385195 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:12:46.389878 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:12:46.403469 kubelet[2196]: I0702 00:12:46.403412 2196 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:12:46.403918 kubelet[2196]: I0702 00:12:46.403757 2196 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 00:12:46.403918 kubelet[2196]: I0702 00:12:46.403891 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:12:46.405007 kubelet[2196]: E0702 00:12:46.404989 2196 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 00:12:46.467919 kubelet[2196]: I0702 00:12:46.467855 2196 topology_manager.go:215] "Topology Admit Handler" podUID="8f82b70d552a4792d933079cb281e8bc" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:12:46.469089 kubelet[2196]: I0702 00:12:46.469068 2196 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:12:46.469786 kubelet[2196]: I0702 00:12:46.469722 2196 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:12:46.476055 systemd[1]: Created slice kubepods-burstable-pod8f82b70d552a4792d933079cb281e8bc.slice - libcontainer container kubepods-burstable-pod8f82b70d552a4792d933079cb281e8bc.slice.
Jul 2 00:12:46.477110 kubelet[2196]: I0702 00:12:46.477079 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:12:46.477229 kubelet[2196]: I0702 00:12:46.477114 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:12:46.477229 kubelet[2196]: I0702 00:12:46.477146 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:12:46.477229 kubelet[2196]: I0702 00:12:46.477170 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:12:46.477229 kubelet[2196]: I0702 00:12:46.477193 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:12:46.477229 kubelet[2196]: I0702 00:12:46.477217 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:12:46.477385 kubelet[2196]: I0702 00:12:46.477241 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:12:46.477385 kubelet[2196]: I0702 00:12:46.477266 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:12:46.477385 kubelet[2196]: I0702 00:12:46.477289 2196 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:12:46.479555 kubelet[2196]: E0702 00:12:46.479517 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms"
Jul 2 00:12:46.496962 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jul 2 00:12:46.515790 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jul 2 00:12:46.578258 kubelet[2196]: I0702 00:12:46.578223 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:12:46.578608 kubelet[2196]: E0702 00:12:46.578582 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Jul 2 00:12:46.796152 kubelet[2196]: E0702 00:12:46.796014 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:46.797384 containerd[1440]: time="2024-07-02T00:12:46.797338750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f82b70d552a4792d933079cb281e8bc,Namespace:kube-system,Attempt:0,}"
Jul 2 00:12:46.813896 kubelet[2196]: E0702 00:12:46.813845 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:46.814495 containerd[1440]: time="2024-07-02T00:12:46.814431874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jul 2 00:12:46.818751 kubelet[2196]: E0702 00:12:46.818718 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:46.819130 containerd[1440]: time="2024-07-02T00:12:46.819099055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jul 2 00:12:46.958501 kubelet[2196]: W0702 00:12:46.958397 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:46.958501 kubelet[2196]: E0702 00:12:46.958500 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.149854 kubelet[2196]: W0702 00:12:47.149787 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.149854 kubelet[2196]: E0702 00:12:47.149851 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.281054 kubelet[2196]: E0702 00:12:47.280985 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s"
Jul 2 00:12:47.354162 kubelet[2196]: W0702 00:12:47.354057 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.354162 kubelet[2196]: E0702 00:12:47.354141 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.380322 kubelet[2196]: I0702 00:12:47.380285 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:12:47.380771 kubelet[2196]: E0702 00:12:47.380734 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Jul 2 00:12:47.402615 kubelet[2196]: W0702 00:12:47.402415 2196 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.402615 kubelet[2196]: E0702 00:12:47.402525 2196 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:47.958214 kubelet[2196]: E0702 00:12:47.958151 2196 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused
Jul 2 00:12:48.455602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037704638.mount: Deactivated successfully.
Jul 2 00:12:48.542569 containerd[1440]: time="2024-07-02T00:12:48.542488964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:12:48.547861 containerd[1440]: time="2024-07-02T00:12:48.547783652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 2 00:12:48.563935 containerd[1440]: time="2024-07-02T00:12:48.563861454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:12:48.565375 containerd[1440]: time="2024-07-02T00:12:48.565321262Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:12:48.571572 containerd[1440]: time="2024-07-02T00:12:48.571502670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:12:48.577278 containerd[1440]: time="2024-07-02T00:12:48.577176458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:12:48.578710 containerd[1440]: time="2024-07-02T00:12:48.578650472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:12:48.591609 containerd[1440]: time="2024-07-02T00:12:48.591558234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:12:48.592664 containerd[1440]: time="2024-07-02T00:12:48.592603575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.773430621s"
Jul 2 00:12:48.611147 containerd[1440]: time="2024-07-02T00:12:48.611090374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.813614294s"
Jul 2 00:12:48.612006 containerd[1440]: time="2024-07-02T00:12:48.611961336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.797398183s"
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810869023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810946711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810971968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810991666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810921332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810965616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.810994090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:12:48.812297 containerd[1440]: time="2024-07-02T00:12:48.811011774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.813511 containerd[1440]: time="2024-07-02T00:12:48.811744462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:12:48.813511 containerd[1440]: time="2024-07-02T00:12:48.813311073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.813511 containerd[1440]: time="2024-07-02T00:12:48.813330069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:12:48.813662 containerd[1440]: time="2024-07-02T00:12:48.813341761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:12:48.842795 systemd[1]: Started cri-containerd-06440f77d7068455884922bd2e4183a1978b19aaee97fb6c1e14e250d498fec6.scope - libcontainer container 06440f77d7068455884922bd2e4183a1978b19aaee97fb6c1e14e250d498fec6.
Jul 2 00:12:48.844631 systemd[1]: Started cri-containerd-51023be5552cd01d9bde04f0edc2b45b16e33125a84f9e3553fe821761dcdab2.scope - libcontainer container 51023be5552cd01d9bde04f0edc2b45b16e33125a84f9e3553fe821761dcdab2.
Jul 2 00:12:48.846003 systemd[1]: Started cri-containerd-c2bfc5ea2f7dc2026081c86e24d718d29206ba3a08a46a2b7ca184b0584939fe.scope - libcontainer container c2bfc5ea2f7dc2026081c86e24d718d29206ba3a08a46a2b7ca184b0584939fe.
Jul 2 00:12:48.881789 kubelet[2196]: E0702 00:12:48.881604 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="3.2s"
Jul 2 00:12:48.891659 containerd[1440]: time="2024-07-02T00:12:48.891564001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"06440f77d7068455884922bd2e4183a1978b19aaee97fb6c1e14e250d498fec6\""
Jul 2 00:12:48.894335 kubelet[2196]: E0702 00:12:48.894243 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:48.895892 containerd[1440]: time="2024-07-02T00:12:48.895853875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"51023be5552cd01d9bde04f0edc2b45b16e33125a84f9e3553fe821761dcdab2\""
Jul 2 00:12:48.896998 kubelet[2196]: E0702 00:12:48.896977 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:48.900348 containerd[1440]: time="2024-07-02T00:12:48.900292530Z" level=info msg="CreateContainer within sandbox \"06440f77d7068455884922bd2e4183a1978b19aaee97fb6c1e14e250d498fec6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:12:48.900718 containerd[1440]: time="2024-07-02T00:12:48.900490496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f82b70d552a4792d933079cb281e8bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2bfc5ea2f7dc2026081c86e24d718d29206ba3a08a46a2b7ca184b0584939fe\""
Jul 2 00:12:48.901605 containerd[1440]: time="2024-07-02T00:12:48.901546728Z" level=info msg="CreateContainer within sandbox \"51023be5552cd01d9bde04f0edc2b45b16e33125a84f9e3553fe821761dcdab2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:12:48.901929 kubelet[2196]: E0702 00:12:48.901892 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:12:48.918789 containerd[1440]: time="2024-07-02T00:12:48.918745425Z" level=info msg="CreateContainer within sandbox \"c2bfc5ea2f7dc2026081c86e24d718d29206ba3a08a46a2b7ca184b0584939fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:12:48.960994 containerd[1440]: time="2024-07-02T00:12:48.960906978Z" level=info msg="CreateContainer within sandbox \"06440f77d7068455884922bd2e4183a1978b19aaee97fb6c1e14e250d498fec6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06d4e84dbccd9416071d0ab3c6104d0437ab4bacec52324d270e2eadc5d7166f\""
Jul 2 00:12:48.961759 containerd[1440]: time="2024-07-02T00:12:48.961715812Z" level=info msg="StartContainer for \"06d4e84dbccd9416071d0ab3c6104d0437ab4bacec52324d270e2eadc5d7166f\""
Jul 2 00:12:48.970935 containerd[1440]: time="2024-07-02T00:12:48.970891638Z" level=info msg="CreateContainer within sandbox \"51023be5552cd01d9bde04f0edc2b45b16e33125a84f9e3553fe821761dcdab2\" for
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87b61e89dd8664a7f3a6f57d3389113e6b019a661fdeb0b35fa81706f55122de\"" Jul 2 00:12:48.971500 containerd[1440]: time="2024-07-02T00:12:48.971436391Z" level=info msg="StartContainer for \"87b61e89dd8664a7f3a6f57d3389113e6b019a661fdeb0b35fa81706f55122de\"" Jul 2 00:12:48.973213 containerd[1440]: time="2024-07-02T00:12:48.973169126Z" level=info msg="CreateContainer within sandbox \"c2bfc5ea2f7dc2026081c86e24d718d29206ba3a08a46a2b7ca184b0584939fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7af62c93a333fbc1fec0994c0d7d24d09d4a6ca85610c6e5cdb2c40d8c6f53a8\"" Jul 2 00:12:48.973552 containerd[1440]: time="2024-07-02T00:12:48.973522797Z" level=info msg="StartContainer for \"7af62c93a333fbc1fec0994c0d7d24d09d4a6ca85610c6e5cdb2c40d8c6f53a8\"" Jul 2 00:12:48.982944 kubelet[2196]: I0702 00:12:48.982895 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:12:48.983823 kubelet[2196]: E0702 00:12:48.983245 2196 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jul 2 00:12:48.991646 systemd[1]: Started cri-containerd-06d4e84dbccd9416071d0ab3c6104d0437ab4bacec52324d270e2eadc5d7166f.scope - libcontainer container 06d4e84dbccd9416071d0ab3c6104d0437ab4bacec52324d270e2eadc5d7166f. Jul 2 00:12:49.010597 systemd[1]: Started cri-containerd-87b61e89dd8664a7f3a6f57d3389113e6b019a661fdeb0b35fa81706f55122de.scope - libcontainer container 87b61e89dd8664a7f3a6f57d3389113e6b019a661fdeb0b35fa81706f55122de. Jul 2 00:12:49.016382 systemd[1]: Started cri-containerd-7af62c93a333fbc1fec0994c0d7d24d09d4a6ca85610c6e5cdb2c40d8c6f53a8.scope - libcontainer container 7af62c93a333fbc1fec0994c0d7d24d09d4a6ca85610c6e5cdb2c40d8c6f53a8. 
Jul 2 00:12:49.049052 containerd[1440]: time="2024-07-02T00:12:49.048977020Z" level=info msg="StartContainer for \"06d4e84dbccd9416071d0ab3c6104d0437ab4bacec52324d270e2eadc5d7166f\" returns successfully" Jul 2 00:12:49.077517 containerd[1440]: time="2024-07-02T00:12:49.077310331Z" level=info msg="StartContainer for \"87b61e89dd8664a7f3a6f57d3389113e6b019a661fdeb0b35fa81706f55122de\" returns successfully" Jul 2 00:12:49.077517 containerd[1440]: time="2024-07-02T00:12:49.077354104Z" level=info msg="StartContainer for \"7af62c93a333fbc1fec0994c0d7d24d09d4a6ca85610c6e5cdb2c40d8c6f53a8\" returns successfully" Jul 2 00:12:49.377780 kubelet[2196]: E0702 00:12:49.377680 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:49.380481 kubelet[2196]: E0702 00:12:49.380384 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:49.382105 kubelet[2196]: E0702 00:12:49.382014 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:50.384567 kubelet[2196]: E0702 00:12:50.384525 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:50.429042 kubelet[2196]: E0702 00:12:50.428991 2196 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:12:50.446321 update_engine[1433]: I0702 00:12:50.446246 1433 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:12:50.473475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2483) Jul 2 00:12:50.509634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2483) Jul 2 00:12:50.797691 kubelet[2196]: E0702 00:12:50.797567 2196 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:12:50.862010 kubelet[2196]: I0702 00:12:50.861937 2196 apiserver.go:52] "Watching apiserver" Jul 2 00:12:50.872511 kubelet[2196]: I0702 00:12:50.872410 2196 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:12:51.239892 kubelet[2196]: E0702 00:12:51.239822 2196 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:12:52.085968 kubelet[2196]: E0702 00:12:52.085919 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:12:52.158076 kubelet[2196]: E0702 00:12:52.158035 2196 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:12:52.185360 kubelet[2196]: I0702 00:12:52.185339 2196 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:12:52.189381 kubelet[2196]: I0702 00:12:52.189297 2196 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:12:52.471311 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-7.scope)... Jul 2 00:12:52.471330 systemd[1]: Reloading... Jul 2 00:12:52.558544 zram_generator::config[2534]: No configuration found. 
Jul 2 00:12:52.674900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:12:52.774263 systemd[1]: Reloading finished in 302 ms. Jul 2 00:12:52.822271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:12:52.835400 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:12:52.835729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:12:52.846938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:12:52.989705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:12:52.994493 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:12:53.040423 kubelet[2573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:12:53.040423 kubelet[2573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:12:53.040423 kubelet[2573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:12:53.040423 kubelet[2573]: I0702 00:12:53.040165 2573 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:12:53.045019 kubelet[2573]: I0702 00:12:53.044995 2573 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:12:53.045019 kubelet[2573]: I0702 00:12:53.045017 2573 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:12:53.045193 kubelet[2573]: I0702 00:12:53.045171 2573 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:12:53.046371 kubelet[2573]: I0702 00:12:53.046341 2573 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:12:53.047636 kubelet[2573]: I0702 00:12:53.047599 2573 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:12:53.055455 kubelet[2573]: I0702 00:12:53.055390 2573 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:12:53.055783 kubelet[2573]: I0702 00:12:53.055753 2573 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:12:53.058608 kubelet[2573]: I0702 00:12:53.055841 2573 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058758 2573 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:12:53.060458 
kubelet[2573]: I0702 00:12:53.058774 2573 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058817 2573 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058912 2573 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058923 2573 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058946 2573 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:12:53.060458 kubelet[2573]: I0702 00:12:53.058965 2573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:12:53.060097 sudo[2588]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:12:53.061146 kubelet[2573]: I0702 00:12:53.060434 2573 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:12:53.061146 kubelet[2573]: I0702 00:12:53.060684 2573 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:12:53.061146 kubelet[2573]: I0702 00:12:53.061135 2573 server.go:1264] "Started kubelet" Jul 2 00:12:53.060395 sudo[2588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:12:53.061537 kubelet[2573]: I0702 00:12:53.061497 2573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:12:53.062815 kubelet[2573]: I0702 00:12:53.062752 2573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:12:53.063398 kubelet[2573]: I0702 00:12:53.063361 2573 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:12:53.063848 kubelet[2573]: I0702 00:12:53.063565 2573 server.go:455] "Adding debug handlers to 
kubelet server" Jul 2 00:12:53.064114 kubelet[2573]: I0702 00:12:53.064072 2573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:12:53.069726 kubelet[2573]: I0702 00:12:53.069182 2573 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:12:53.069726 kubelet[2573]: I0702 00:12:53.069536 2573 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:12:53.069726 kubelet[2573]: I0702 00:12:53.069665 2573 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:12:53.072190 kubelet[2573]: I0702 00:12:53.072174 2573 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:12:53.072326 kubelet[2573]: I0702 00:12:53.072310 2573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:12:53.079268 kubelet[2573]: E0702 00:12:53.079248 2573 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:12:53.081599 kubelet[2573]: I0702 00:12:53.081576 2573 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:12:53.089278 kubelet[2573]: I0702 00:12:53.089231 2573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:12:53.090485 kubelet[2573]: I0702 00:12:53.090433 2573 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:12:53.090537 kubelet[2573]: I0702 00:12:53.090494 2573 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:12:53.090537 kubelet[2573]: I0702 00:12:53.090517 2573 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:12:53.090587 kubelet[2573]: E0702 00:12:53.090560 2573 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:12:53.131473 kubelet[2573]: I0702 00:12:53.131427 2573 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:12:53.131843 kubelet[2573]: I0702 00:12:53.131695 2573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:12:53.131843 kubelet[2573]: I0702 00:12:53.131718 2573 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:12:53.132076 kubelet[2573]: I0702 00:12:53.132009 2573 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:12:53.132076 kubelet[2573]: I0702 00:12:53.132023 2573 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:12:53.132076 kubelet[2573]: I0702 00:12:53.132042 2573 policy_none.go:49] "None policy: Start" Jul 2 00:12:53.132795 kubelet[2573]: I0702 00:12:53.132767 2573 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:12:53.133401 kubelet[2573]: I0702 00:12:53.132913 2573 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:12:53.133401 kubelet[2573]: I0702 00:12:53.133068 2573 state_mem.go:75] "Updated machine memory state" Jul 2 00:12:53.138646 kubelet[2573]: I0702 00:12:53.137956 2573 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:12:53.138646 kubelet[2573]: I0702 00:12:53.138162 2573 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:12:53.138646 kubelet[2573]: I0702 00:12:53.138303 2573 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:12:53.173585 kubelet[2573]: I0702 00:12:53.173551 2573 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:12:53.179132 kubelet[2573]: I0702 00:12:53.179043 2573 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 00:12:53.179581 kubelet[2573]: I0702 00:12:53.179502 2573 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:12:53.190843 kubelet[2573]: I0702 00:12:53.190791 2573 topology_manager.go:215] "Topology Admit Handler" podUID="8f82b70d552a4792d933079cb281e8bc" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:12:53.190989 kubelet[2573]: I0702 00:12:53.190884 2573 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:12:53.190989 kubelet[2573]: I0702 00:12:53.190932 2573 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:12:53.370489 kubelet[2573]: I0702 00:12:53.370420 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:53.370489 kubelet[2573]: I0702 00:12:53.370489 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:53.370805 
kubelet[2573]: I0702 00:12:53.370514 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:12:53.370805 kubelet[2573]: I0702 00:12:53.370532 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:12:53.370805 kubelet[2573]: I0702 00:12:53.370553 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:53.370805 kubelet[2573]: I0702 00:12:53.370576 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:53.370805 kubelet[2573]: I0702 00:12:53.370604 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:12:53.370985 kubelet[2573]: I0702 00:12:53.370622 2573 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f82b70d552a4792d933079cb281e8bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f82b70d552a4792d933079cb281e8bc\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:12:53.370985 kubelet[2573]: I0702 00:12:53.370644 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:53.501148 kubelet[2573]: E0702 00:12:53.501113 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:53.502669 kubelet[2573]: E0702 00:12:53.502650 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:53.507435 kubelet[2573]: E0702 00:12:53.507417 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:53.565619 sudo[2588]: pam_unix(sudo:session): session closed for user root Jul 2 00:12:54.062980 kubelet[2573]: I0702 00:12:54.062916 2573 apiserver.go:52] "Watching apiserver" Jul 2 00:12:54.070098 kubelet[2573]: I0702 00:12:54.070063 2573 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:12:54.239688 kubelet[2573]: E0702 00:12:54.239461 2573 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Jul 2 00:12:54.239688 kubelet[2573]: E0702 00:12:54.239544 2573 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:12:54.239895 kubelet[2573]: E0702 00:12:54.239845 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:54.240840 kubelet[2573]: E0702 00:12:54.240353 2573 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 00:12:54.240840 kubelet[2573]: E0702 00:12:54.240668 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:54.240998 kubelet[2573]: E0702 00:12:54.240973 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:54.262647 kubelet[2573]: I0702 00:12:54.262488 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.262463021 podStartE2EDuration="1.262463021s" podCreationTimestamp="2024-07-02 00:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:12:54.213462293 +0000 UTC m=+1.214915606" watchObservedRunningTime="2024-07-02 00:12:54.262463021 +0000 UTC m=+1.263916335" Jul 2 00:12:54.296790 kubelet[2573]: I0702 00:12:54.296719 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2966952840000001 
podStartE2EDuration="1.296695284s" podCreationTimestamp="2024-07-02 00:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:12:54.296498593 +0000 UTC m=+1.297951906" watchObservedRunningTime="2024-07-02 00:12:54.296695284 +0000 UTC m=+1.298148597" Jul 2 00:12:54.296984 kubelet[2573]: I0702 00:12:54.296874 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.296868452 podStartE2EDuration="1.296868452s" podCreationTimestamp="2024-07-02 00:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:12:54.262361539 +0000 UTC m=+1.263814852" watchObservedRunningTime="2024-07-02 00:12:54.296868452 +0000 UTC m=+1.298321765" Jul 2 00:12:54.853028 sudo[1627]: pam_unix(sudo:session): session closed for user root Jul 2 00:12:54.855408 sshd[1624]: pam_unix(sshd:session): session closed for user core Jul 2 00:12:54.859599 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:47920.service: Deactivated successfully. Jul 2 00:12:54.861553 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:12:54.861746 systemd[1]: session-7.scope: Consumed 4.980s CPU time, 140.0M memory peak, 0B memory swap peak. Jul 2 00:12:54.862243 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:12:54.863336 systemd-logind[1432]: Removed session 7. 
Jul 2 00:12:55.107884 kubelet[2573]: E0702 00:12:55.107728 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:55.107884 kubelet[2573]: E0702 00:12:55.107758 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:55.107884 kubelet[2573]: E0702 00:12:55.107830 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:12:58.420767 kubelet[2573]: E0702 00:12:58.420702 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:01.497472 kubelet[2573]: E0702 00:13:01.497394 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:02.118633 kubelet[2573]: E0702 00:13:02.118589 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:03.073511 kubelet[2573]: E0702 00:13:03.072072 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:03.120170 kubelet[2573]: E0702 00:13:03.120092 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:07.319729 kubelet[2573]: I0702 00:13:07.319678 2573 kuberuntime_manager.go:1523] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:13:07.320467 containerd[1440]: time="2024-07-02T00:13:07.320174561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:13:07.320797 kubelet[2573]: I0702 00:13:07.320489 2573 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:13:08.109057 kubelet[2573]: I0702 00:13:08.108114 2573 topology_manager.go:215] "Topology Admit Handler" podUID="b71a3416-59e0-4a66-b32f-e5004540cea4" podNamespace="kube-system" podName="kube-proxy-qpkr4" Jul 2 00:13:08.117585 systemd[1]: Created slice kubepods-besteffort-podb71a3416_59e0_4a66_b32f_e5004540cea4.slice - libcontainer container kubepods-besteffort-podb71a3416_59e0_4a66_b32f_e5004540cea4.slice. Jul 2 00:13:08.120130 kubelet[2573]: I0702 00:13:08.119966 2573 topology_manager.go:215] "Topology Admit Handler" podUID="0fe7b374-3586-4562-8303-96dd51800931" podNamespace="kube-system" podName="cilium-kwzm7" Jul 2 00:13:08.136764 systemd[1]: Created slice kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice - libcontainer container kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice. 
Jul 2 00:13:08.255996 kubelet[2573]: I0702 00:13:08.255926 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-lib-modules\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.255996 kubelet[2573]: I0702 00:13:08.255973 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fe7b374-3586-4562-8303-96dd51800931-cilium-config-path\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.255996 kubelet[2573]: I0702 00:13:08.255988 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-hubble-tls\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.255996 kubelet[2573]: I0702 00:13:08.256003 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cni-path\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.255996 kubelet[2573]: I0702 00:13:08.256017 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-cgroup\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256351 kubelet[2573]: I0702 00:13:08.256041 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-xtables-lock\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256351 kubelet[2573]: I0702 00:13:08.256064 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzkh\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-kube-api-access-rdzkh\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256351 kubelet[2573]: I0702 00:13:08.256123 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-hostproc\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256351 kubelet[2573]: I0702 00:13:08.256167 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71a3416-59e0-4a66-b32f-e5004540cea4-xtables-lock\") pod \"kube-proxy-qpkr4\" (UID: \"b71a3416-59e0-4a66-b32f-e5004540cea4\") " pod="kube-system/kube-proxy-qpkr4" Jul 2 00:13:08.256351 kubelet[2573]: I0702 00:13:08.256273 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71a3416-59e0-4a66-b32f-e5004540cea4-lib-modules\") pod \"kube-proxy-qpkr4\" (UID: \"b71a3416-59e0-4a66-b32f-e5004540cea4\") " pod="kube-system/kube-proxy-qpkr4" Jul 2 00:13:08.256560 kubelet[2573]: I0702 00:13:08.256347 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2ps\" (UniqueName: 
\"kubernetes.io/projected/b71a3416-59e0-4a66-b32f-e5004540cea4-kube-api-access-zc2ps\") pod \"kube-proxy-qpkr4\" (UID: \"b71a3416-59e0-4a66-b32f-e5004540cea4\") " pod="kube-system/kube-proxy-qpkr4" Jul 2 00:13:08.256560 kubelet[2573]: I0702 00:13:08.256372 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fe7b374-3586-4562-8303-96dd51800931-clustermesh-secrets\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256560 kubelet[2573]: I0702 00:13:08.256429 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b71a3416-59e0-4a66-b32f-e5004540cea4-kube-proxy\") pod \"kube-proxy-qpkr4\" (UID: \"b71a3416-59e0-4a66-b32f-e5004540cea4\") " pod="kube-system/kube-proxy-qpkr4" Jul 2 00:13:08.256560 kubelet[2573]: I0702 00:13:08.256472 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-etc-cni-netd\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256560 kubelet[2573]: I0702 00:13:08.256498 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-bpf-maps\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256812 kubelet[2573]: I0702 00:13:08.256514 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-net\") pod \"cilium-kwzm7\" (UID: 
\"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256812 kubelet[2573]: I0702 00:13:08.256548 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-kernel\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.256812 kubelet[2573]: I0702 00:13:08.256572 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-run\") pod \"cilium-kwzm7\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " pod="kube-system/cilium-kwzm7" Jul 2 00:13:08.307539 kubelet[2573]: I0702 00:13:08.306421 2573 topology_manager.go:215] "Topology Admit Handler" podUID="c296ce57-b4aa-4de5-b017-39dc5c8f4eea" podNamespace="kube-system" podName="cilium-operator-599987898-2gp8x" Jul 2 00:13:08.325087 systemd[1]: Created slice kubepods-besteffort-podc296ce57_b4aa_4de5_b017_39dc5c8f4eea.slice - libcontainer container kubepods-besteffort-podc296ce57_b4aa_4de5_b017_39dc5c8f4eea.slice. 
Jul 2 00:13:08.427076 kubelet[2573]: E0702 00:13:08.426919 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.430160 kubelet[2573]: E0702 00:13:08.430107 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.430991 containerd[1440]: time="2024-07-02T00:13:08.430933066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpkr4,Uid:b71a3416-59e0-4a66-b32f-e5004540cea4,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:08.441840 kubelet[2573]: E0702 00:13:08.441798 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.442611 containerd[1440]: time="2024-07-02T00:13:08.442557487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwzm7,Uid:0fe7b374-3586-4562-8303-96dd51800931,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:08.457810 kubelet[2573]: I0702 00:13:08.457761 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rr6w\" (UniqueName: \"kubernetes.io/projected/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-kube-api-access-4rr6w\") pod \"cilium-operator-599987898-2gp8x\" (UID: \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\") " pod="kube-system/cilium-operator-599987898-2gp8x" Jul 2 00:13:08.457810 kubelet[2573]: I0702 00:13:08.457816 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-cilium-config-path\") pod \"cilium-operator-599987898-2gp8x\" (UID: \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\") " 
pod="kube-system/cilium-operator-599987898-2gp8x" Jul 2 00:13:08.472938 containerd[1440]: time="2024-07-02T00:13:08.472621904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:13:08.472938 containerd[1440]: time="2024-07-02T00:13:08.472704831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.472938 containerd[1440]: time="2024-07-02T00:13:08.472737552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:13:08.472938 containerd[1440]: time="2024-07-02T00:13:08.472767668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.479761 containerd[1440]: time="2024-07-02T00:13:08.479566420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:13:08.479761 containerd[1440]: time="2024-07-02T00:13:08.479634549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.479761 containerd[1440]: time="2024-07-02T00:13:08.479660648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:13:08.479994 containerd[1440]: time="2024-07-02T00:13:08.479675185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.504780 systemd[1]: Started cri-containerd-897f885fc3dd9c98c48190f60c0e02188871a4720e7257e89be955bddc16b316.scope - libcontainer container 897f885fc3dd9c98c48190f60c0e02188871a4720e7257e89be955bddc16b316. 
Jul 2 00:13:08.508327 systemd[1]: Started cri-containerd-3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732.scope - libcontainer container 3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732. Jul 2 00:13:08.535864 containerd[1440]: time="2024-07-02T00:13:08.535786409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpkr4,Uid:b71a3416-59e0-4a66-b32f-e5004540cea4,Namespace:kube-system,Attempt:0,} returns sandbox id \"897f885fc3dd9c98c48190f60c0e02188871a4720e7257e89be955bddc16b316\"" Jul 2 00:13:08.536926 kubelet[2573]: E0702 00:13:08.536885 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.540662 containerd[1440]: time="2024-07-02T00:13:08.540539422Z" level=info msg="CreateContainer within sandbox \"897f885fc3dd9c98c48190f60c0e02188871a4720e7257e89be955bddc16b316\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:13:08.543689 containerd[1440]: time="2024-07-02T00:13:08.543635217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwzm7,Uid:0fe7b374-3586-4562-8303-96dd51800931,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\"" Jul 2 00:13:08.544513 kubelet[2573]: E0702 00:13:08.544465 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.545772 containerd[1440]: time="2024-07-02T00:13:08.545746500Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:13:08.571045 containerd[1440]: time="2024-07-02T00:13:08.570981490Z" level=info msg="CreateContainer within sandbox \"897f885fc3dd9c98c48190f60c0e02188871a4720e7257e89be955bddc16b316\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a3f8f3cb9e431175fff6cb7829c9a3fd062bad68bda58ad5c8acf53c6f8ff5b\"" Jul 2 00:13:08.571798 containerd[1440]: time="2024-07-02T00:13:08.571743534Z" level=info msg="StartContainer for \"9a3f8f3cb9e431175fff6cb7829c9a3fd062bad68bda58ad5c8acf53c6f8ff5b\"" Jul 2 00:13:08.603607 systemd[1]: Started cri-containerd-9a3f8f3cb9e431175fff6cb7829c9a3fd062bad68bda58ad5c8acf53c6f8ff5b.scope - libcontainer container 9a3f8f3cb9e431175fff6cb7829c9a3fd062bad68bda58ad5c8acf53c6f8ff5b. Jul 2 00:13:08.628335 kubelet[2573]: E0702 00:13:08.628306 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:08.630235 containerd[1440]: time="2024-07-02T00:13:08.629596254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2gp8x,Uid:c296ce57-b4aa-4de5-b017-39dc5c8f4eea,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:08.645432 containerd[1440]: time="2024-07-02T00:13:08.645033491Z" level=info msg="StartContainer for \"9a3f8f3cb9e431175fff6cb7829c9a3fd062bad68bda58ad5c8acf53c6f8ff5b\" returns successfully" Jul 2 00:13:08.668530 containerd[1440]: time="2024-07-02T00:13:08.668358276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:13:08.668530 containerd[1440]: time="2024-07-02T00:13:08.668485827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.668530 containerd[1440]: time="2024-07-02T00:13:08.668501165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:13:08.668745 containerd[1440]: time="2024-07-02T00:13:08.668513609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:13:08.689629 systemd[1]: Started cri-containerd-52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0.scope - libcontainer container 52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0. Jul 2 00:13:08.739849 containerd[1440]: time="2024-07-02T00:13:08.739801829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2gp8x,Uid:c296ce57-b4aa-4de5-b017-39dc5c8f4eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\"" Jul 2 00:13:08.740877 kubelet[2573]: E0702 00:13:08.740763 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:09.133831 kubelet[2573]: E0702 00:13:09.133760 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:13.105470 kubelet[2573]: I0702 00:13:13.103414 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qpkr4" podStartSLOduration=5.103387046 podStartE2EDuration="5.103387046s" podCreationTimestamp="2024-07-02 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:13:09.141343841 +0000 UTC m=+16.142797154" watchObservedRunningTime="2024-07-02 00:13:13.103387046 +0000 UTC m=+20.104840359" Jul 2 00:13:17.926510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980488612.mount: Deactivated successfully. 
Jul 2 00:13:24.763953 containerd[1440]: time="2024-07-02T00:13:24.763868303Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:13:24.805237 containerd[1440]: time="2024-07-02T00:13:24.805133009Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735339" Jul 2 00:13:24.846674 containerd[1440]: time="2024-07-02T00:13:24.846600765Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:13:24.848784 containerd[1440]: time="2024-07-02T00:13:24.848687103Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.302902141s" Jul 2 00:13:24.848784 containerd[1440]: time="2024-07-02T00:13:24.848750864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:13:24.852984 containerd[1440]: time="2024-07-02T00:13:24.852943287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:13:24.861853 containerd[1440]: time="2024-07-02T00:13:24.861799048Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:13:24.912938 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:50648.service - OpenSSH per-connection server daemon (10.0.0.1:50648). Jul 2 00:13:24.998648 sshd[2963]: Accepted publickey for core from 10.0.0.1 port 50648 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:13:25.000253 sshd[2963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:13:25.012428 systemd-logind[1432]: New session 8 of user core. Jul 2 00:13:25.021604 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:13:25.226870 sshd[2963]: pam_unix(sshd:session): session closed for user core Jul 2 00:13:25.231515 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:50648.service: Deactivated successfully. Jul 2 00:13:25.233838 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:13:25.234549 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:13:25.235488 systemd-logind[1432]: Removed session 8. Jul 2 00:13:25.387117 containerd[1440]: time="2024-07-02T00:13:25.387031315Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\"" Jul 2 00:13:25.392131 containerd[1440]: time="2024-07-02T00:13:25.392068133Z" level=info msg="StartContainer for \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\"" Jul 2 00:13:25.423729 systemd[1]: Started cri-containerd-a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6.scope - libcontainer container a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6. Jul 2 00:13:25.470890 systemd[1]: cri-containerd-a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6.scope: Deactivated successfully. 
Jul 2 00:13:25.481550 containerd[1440]: time="2024-07-02T00:13:25.481459440Z" level=info msg="StartContainer for \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\" returns successfully" Jul 2 00:13:26.094170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6-rootfs.mount: Deactivated successfully. Jul 2 00:13:26.258304 kubelet[2573]: E0702 00:13:26.258237 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:26.298545 containerd[1440]: time="2024-07-02T00:13:26.298437741Z" level=info msg="shim disconnected" id=a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6 namespace=k8s.io Jul 2 00:13:26.298545 containerd[1440]: time="2024-07-02T00:13:26.298533402Z" level=warning msg="cleaning up after shim disconnected" id=a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6 namespace=k8s.io Jul 2 00:13:26.298545 containerd[1440]: time="2024-07-02T00:13:26.298546446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:13:27.257816 kubelet[2573]: E0702 00:13:27.257761 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:27.260771 containerd[1440]: time="2024-07-02T00:13:27.260713513Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:13:27.997520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670689638.mount: Deactivated successfully. Jul 2 00:13:28.029333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531655175.mount: Deactivated successfully. 
Jul 2 00:13:28.042485 containerd[1440]: time="2024-07-02T00:13:28.042325899Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\"" Jul 2 00:13:28.043102 containerd[1440]: time="2024-07-02T00:13:28.043066871Z" level=info msg="StartContainer for \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\"" Jul 2 00:13:28.080743 systemd[1]: Started cri-containerd-ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020.scope - libcontainer container ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020. Jul 2 00:13:28.116968 containerd[1440]: time="2024-07-02T00:13:28.116910557Z" level=info msg="StartContainer for \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\" returns successfully" Jul 2 00:13:28.131797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:13:28.132060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:13:28.132142 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:13:28.140838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:13:28.141181 systemd[1]: cri-containerd-ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020.scope: Deactivated successfully. Jul 2 00:13:28.211390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 00:13:28.216712 containerd[1440]: time="2024-07-02T00:13:28.216556078Z" level=info msg="shim disconnected" id=ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020 namespace=k8s.io Jul 2 00:13:28.216712 containerd[1440]: time="2024-07-02T00:13:28.216642622Z" level=warning msg="cleaning up after shim disconnected" id=ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020 namespace=k8s.io Jul 2 00:13:28.216712 containerd[1440]: time="2024-07-02T00:13:28.216654464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:13:28.267530 kubelet[2573]: E0702 00:13:28.267224 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:28.278569 containerd[1440]: time="2024-07-02T00:13:28.276477946Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:13:28.307482 containerd[1440]: time="2024-07-02T00:13:28.307409142Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\"" Jul 2 00:13:28.308007 containerd[1440]: time="2024-07-02T00:13:28.307972850Z" level=info msg="StartContainer for \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\"" Jul 2 00:13:28.343656 systemd[1]: Started cri-containerd-5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a.scope - libcontainer container 5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a. Jul 2 00:13:28.381918 systemd[1]: cri-containerd-5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a.scope: Deactivated successfully. 
Jul 2 00:13:28.667962 containerd[1440]: time="2024-07-02T00:13:28.667886424Z" level=info msg="StartContainer for \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\" returns successfully" Jul 2 00:13:28.929519 containerd[1440]: time="2024-07-02T00:13:28.929354429Z" level=info msg="shim disconnected" id=5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a namespace=k8s.io Jul 2 00:13:28.929519 containerd[1440]: time="2024-07-02T00:13:28.929415513Z" level=warning msg="cleaning up after shim disconnected" id=5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a namespace=k8s.io Jul 2 00:13:28.929519 containerd[1440]: time="2024-07-02T00:13:28.929427616Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:13:28.994531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020-rootfs.mount: Deactivated successfully. Jul 2 00:13:29.254691 containerd[1440]: time="2024-07-02T00:13:29.254414088Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:13:29.257578 containerd[1440]: time="2024-07-02T00:13:29.257478080Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907265" Jul 2 00:13:29.259404 containerd[1440]: time="2024-07-02T00:13:29.259325909Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:13:29.261382 containerd[1440]: time="2024-07-02T00:13:29.261303252Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.408321362s"
Jul 2 00:13:29.261382 containerd[1440]: time="2024-07-02T00:13:29.261351903Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 00:13:29.266085 containerd[1440]: time="2024-07-02T00:13:29.265896816Z" level=info msg="CreateContainer within sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:13:29.271284 kubelet[2573]: E0702 00:13:29.271230 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:29.273410 containerd[1440]: time="2024-07-02T00:13:29.273356682Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:13:29.336128 containerd[1440]: time="2024-07-02T00:13:29.336050738Z" level=info msg="CreateContainer within sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\""
Jul 2 00:13:29.336807 containerd[1440]: time="2024-07-02T00:13:29.336738820Z" level=info msg="StartContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\""
Jul 2 00:13:29.339536 containerd[1440]: time="2024-07-02T00:13:29.339467022Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\""
Jul 2 00:13:29.340670 containerd[1440]: time="2024-07-02T00:13:29.340109469Z" level=info msg="StartContainer for \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\""
Jul 2 00:13:29.370672 systemd[1]: Started cri-containerd-93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff.scope - libcontainer container 93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff.
Jul 2 00:13:29.374023 systemd[1]: Started cri-containerd-f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391.scope - libcontainer container f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391.
Jul 2 00:13:29.404783 systemd[1]: cri-containerd-f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391.scope: Deactivated successfully.
Jul 2 00:13:29.406135 containerd[1440]: time="2024-07-02T00:13:29.406009084Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice/cri-containerd-f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391.scope/memory.events\": no such file or directory"
Jul 2 00:13:29.431007 containerd[1440]: time="2024-07-02T00:13:29.430874578Z" level=info msg="StartContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" returns successfully"
Jul 2 00:13:29.431215 containerd[1440]: time="2024-07-02T00:13:29.430919001Z" level=info msg="StartContainer for \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\" returns successfully"
Jul 2 00:13:29.770200 containerd[1440]: time="2024-07-02T00:13:29.770104144Z" level=info msg="shim disconnected" id=f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391 namespace=k8s.io
Jul 2 00:13:29.770200 containerd[1440]: time="2024-07-02T00:13:29.770190125Z" level=warning msg="cleaning up after shim disconnected" id=f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391 namespace=k8s.io
Jul 2 00:13:29.770200 containerd[1440]: time="2024-07-02T00:13:29.770203420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:13:30.239171 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:44834.service - OpenSSH per-connection server daemon (10.0.0.1:44834).
Jul 2 00:13:30.274588 kubelet[2573]: E0702 00:13:30.274549 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:30.276794 kubelet[2573]: E0702 00:13:30.276736 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:30.279807 containerd[1440]: time="2024-07-02T00:13:30.279757402Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:13:30.328391 sshd[3268]: Accepted publickey for core from 10.0.0.1 port 44834 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:30.330344 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:30.335815 systemd-logind[1432]: New session 9 of user core.
Jul 2 00:13:30.344775 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:13:30.429215 kubelet[2573]: I0702 00:13:30.429119 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2gp8x" podStartSLOduration=1.908900311 podStartE2EDuration="22.429096269s" podCreationTimestamp="2024-07-02 00:13:08 +0000 UTC" firstStartedPulling="2024-07-02 00:13:08.743172381 +0000 UTC m=+15.744625694" lastFinishedPulling="2024-07-02 00:13:29.263368339 +0000 UTC m=+36.264821652" observedRunningTime="2024-07-02 00:13:30.428498516 +0000 UTC m=+37.429951829" watchObservedRunningTime="2024-07-02 00:13:30.429096269 +0000 UTC m=+37.430549582"
Jul 2 00:13:30.536239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457829582.mount: Deactivated successfully.
Jul 2 00:13:30.738633 containerd[1440]: time="2024-07-02T00:13:30.738492076Z" level=info msg="CreateContainer within sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\""
Jul 2 00:13:30.739891 containerd[1440]: time="2024-07-02T00:13:30.739848591Z" level=info msg="StartContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\""
Jul 2 00:13:30.753690 sshd[3268]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:30.757864 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:44834.service: Deactivated successfully.
Jul 2 00:13:30.761981 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:13:30.781302 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:13:30.782321 systemd-logind[1432]: Removed session 9.
Jul 2 00:13:30.801637 systemd[1]: Started cri-containerd-38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340.scope - libcontainer container 38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340.
Jul 2 00:13:30.986145 containerd[1440]: time="2024-07-02T00:13:30.986087769Z" level=info msg="StartContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" returns successfully"
Jul 2 00:13:31.008408 systemd[1]: run-containerd-runc-k8s.io-38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340-runc.mjLSS6.mount: Deactivated successfully.
Jul 2 00:13:31.120423 kubelet[2573]: I0702 00:13:31.119636 2573 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 00:13:31.283327 kubelet[2573]: E0702 00:13:31.283288 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:31.284500 kubelet[2573]: E0702 00:13:31.284063 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:31.364639 kubelet[2573]: I0702 00:13:31.364584 2573 topology_manager.go:215] "Topology Admit Handler" podUID="fa3a8fd7-df50-4e83-804a-be1aa354e12b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cx964"
Jul 2 00:13:31.364874 kubelet[2573]: I0702 00:13:31.364830 2573 topology_manager.go:215] "Topology Admit Handler" podUID="f7488958-4410-4cc3-a529-a4b698c6082d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2vdwv"
Jul 2 00:13:31.374488 systemd[1]: Created slice kubepods-burstable-podf7488958_4410_4cc3_a529_a4b698c6082d.slice - libcontainer container kubepods-burstable-podf7488958_4410_4cc3_a529_a4b698c6082d.slice.
Jul 2 00:13:31.380860 systemd[1]: Created slice kubepods-burstable-podfa3a8fd7_df50_4e83_804a_be1aa354e12b.slice - libcontainer container kubepods-burstable-podfa3a8fd7_df50_4e83_804a_be1aa354e12b.slice.
Jul 2 00:13:31.518682 kubelet[2573]: I0702 00:13:31.518595 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcc7\" (UniqueName: \"kubernetes.io/projected/fa3a8fd7-df50-4e83-804a-be1aa354e12b-kube-api-access-qpcc7\") pod \"coredns-7db6d8ff4d-cx964\" (UID: \"fa3a8fd7-df50-4e83-804a-be1aa354e12b\") " pod="kube-system/coredns-7db6d8ff4d-cx964"
Jul 2 00:13:31.518682 kubelet[2573]: I0702 00:13:31.518663 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqp7n\" (UniqueName: \"kubernetes.io/projected/f7488958-4410-4cc3-a529-a4b698c6082d-kube-api-access-rqp7n\") pod \"coredns-7db6d8ff4d-2vdwv\" (UID: \"f7488958-4410-4cc3-a529-a4b698c6082d\") " pod="kube-system/coredns-7db6d8ff4d-2vdwv"
Jul 2 00:13:31.518682 kubelet[2573]: I0702 00:13:31.518693 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa3a8fd7-df50-4e83-804a-be1aa354e12b-config-volume\") pod \"coredns-7db6d8ff4d-cx964\" (UID: \"fa3a8fd7-df50-4e83-804a-be1aa354e12b\") " pod="kube-system/coredns-7db6d8ff4d-cx964"
Jul 2 00:13:31.518966 kubelet[2573]: I0702 00:13:31.518738 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7488958-4410-4cc3-a529-a4b698c6082d-config-volume\") pod \"coredns-7db6d8ff4d-2vdwv\" (UID: \"f7488958-4410-4cc3-a529-a4b698c6082d\") " pod="kube-system/coredns-7db6d8ff4d-2vdwv"
Jul 2 00:13:31.602431 kubelet[2573]: I0702 00:13:31.602356 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kwzm7" podStartSLOduration=7.296708275 podStartE2EDuration="23.602333423s" podCreationTimestamp="2024-07-02 00:13:08 +0000 UTC" firstStartedPulling="2024-07-02 00:13:08.54526834 +0000 UTC m=+15.546721653" lastFinishedPulling="2024-07-02 00:13:24.850893488 +0000 UTC m=+31.852346801" observedRunningTime="2024-07-02 00:13:31.396177874 +0000 UTC m=+38.397631208" watchObservedRunningTime="2024-07-02 00:13:31.602333423 +0000 UTC m=+38.603786736"
Jul 2 00:13:31.979545 kubelet[2573]: E0702 00:13:31.979501 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:31.983799 kubelet[2573]: E0702 00:13:31.983769 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:31.984254 containerd[1440]: time="2024-07-02T00:13:31.984201576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cx964,Uid:fa3a8fd7-df50-4e83-804a-be1aa354e12b,Namespace:kube-system,Attempt:0,}"
Jul 2 00:13:31.989168 containerd[1440]: time="2024-07-02T00:13:31.989119408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2vdwv,Uid:f7488958-4410-4cc3-a529-a4b698c6082d,Namespace:kube-system,Attempt:0,}"
Jul 2 00:13:32.285654 kubelet[2573]: E0702 00:13:32.285526 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:33.288980 kubelet[2573]: E0702 00:13:33.288742 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:33.299268 systemd-networkd[1385]: cilium_host: Link UP
Jul 2 00:13:33.300901 systemd-networkd[1385]: cilium_net: Link UP
Jul 2 00:13:33.302732 systemd-networkd[1385]: cilium_net: Gained carrier
Jul 2 00:13:33.303129 systemd-networkd[1385]: cilium_host: Gained carrier
Jul 2 00:13:33.303334 systemd-networkd[1385]: cilium_net: Gained IPv6LL
Jul 2 00:13:33.303601 systemd-networkd[1385]: cilium_host: Gained IPv6LL
Jul 2 00:13:33.428553 systemd-networkd[1385]: cilium_vxlan: Link UP
Jul 2 00:13:33.428565 systemd-networkd[1385]: cilium_vxlan: Gained carrier
Jul 2 00:13:33.685483 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:13:34.415029 systemd-networkd[1385]: lxc_health: Link UP
Jul 2 00:13:34.420587 systemd-networkd[1385]: lxc_health: Gained carrier
Jul 2 00:13:34.446833 kubelet[2573]: E0702 00:13:34.446783 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:34.599676 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL
Jul 2 00:13:34.741622 systemd-networkd[1385]: lxcbb4399fb0833: Link UP
Jul 2 00:13:34.751623 kernel: eth0: renamed from tmpdb879
Jul 2 00:13:34.782486 kernel: eth0: renamed from tmpa48a7
Jul 2 00:13:34.793229 systemd-networkd[1385]: lxcbb4399fb0833: Gained carrier
Jul 2 00:13:34.794071 systemd-networkd[1385]: lxccc45e466c6c6: Link UP
Jul 2 00:13:34.794616 systemd-networkd[1385]: lxccc45e466c6c6: Gained carrier
Jul 2 00:13:35.768272 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:44848.service - OpenSSH per-connection server daemon (10.0.0.1:44848).
Jul 2 00:13:35.814337 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 44848 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:35.816315 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:35.818880 kubelet[2573]: I0702 00:13:35.817415 2573 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 00:13:35.818880 kubelet[2573]: E0702 00:13:35.818411 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:35.824976 systemd-logind[1432]: New session 10 of user core.
Jul 2 00:13:35.833113 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:13:35.973677 sshd[3805]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:35.979215 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:44848.service: Deactivated successfully.
Jul 2 00:13:35.982096 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:13:35.983242 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:13:35.984608 systemd-logind[1432]: Removed session 10.
Jul 2 00:13:36.262637 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Jul 2 00:13:36.295066 kubelet[2573]: E0702 00:13:36.295020 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:36.390682 systemd-networkd[1385]: lxccc45e466c6c6: Gained IPv6LL
Jul 2 00:13:36.646687 systemd-networkd[1385]: lxcbb4399fb0833: Gained IPv6LL
Jul 2 00:13:38.450061 containerd[1440]: time="2024-07-02T00:13:38.449907685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:13:38.450061 containerd[1440]: time="2024-07-02T00:13:38.449972797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450199142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450264574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450279793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450289431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450547857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:13:38.450636 containerd[1440]: time="2024-07-02T00:13:38.450570489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:38.478625 systemd[1]: Started cri-containerd-a48a7b18215529e8d30957f6abf21bf3b8ada610e67896bfe59efca763b70947.scope - libcontainer container a48a7b18215529e8d30957f6abf21bf3b8ada610e67896bfe59efca763b70947.
Jul 2 00:13:38.480670 systemd[1]: Started cri-containerd-db8798c5758230ed2ebf424d12918ff301e983b334b1f4d10ee74bc0cf51ab6f.scope - libcontainer container db8798c5758230ed2ebf424d12918ff301e983b334b1f4d10ee74bc0cf51ab6f.
Jul 2 00:13:38.492857 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:13:38.498061 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:13:38.525828 containerd[1440]: time="2024-07-02T00:13:38.525779195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2vdwv,Uid:f7488958-4410-4cc3-a529-a4b698c6082d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a48a7b18215529e8d30957f6abf21bf3b8ada610e67896bfe59efca763b70947\""
Jul 2 00:13:38.527366 kubelet[2573]: E0702 00:13:38.527324 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:38.531795 containerd[1440]: time="2024-07-02T00:13:38.531701511Z" level=info msg="CreateContainer within sandbox \"a48a7b18215529e8d30957f6abf21bf3b8ada610e67896bfe59efca763b70947\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:13:38.531973 containerd[1440]: time="2024-07-02T00:13:38.531936381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cx964,Uid:fa3a8fd7-df50-4e83-804a-be1aa354e12b,Namespace:kube-system,Attempt:0,} returns sandbox id \"db8798c5758230ed2ebf424d12918ff301e983b334b1f4d10ee74bc0cf51ab6f\""
Jul 2 00:13:38.533357 kubelet[2573]: E0702 00:13:38.533257 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:38.538345 containerd[1440]: time="2024-07-02T00:13:38.538289646Z" level=info msg="CreateContainer within sandbox \"db8798c5758230ed2ebf424d12918ff301e983b334b1f4d10ee74bc0cf51ab6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:13:38.572080 containerd[1440]: time="2024-07-02T00:13:38.571984445Z" level=info msg="CreateContainer within sandbox \"a48a7b18215529e8d30957f6abf21bf3b8ada610e67896bfe59efca763b70947\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8da1b20e346ca35011e02805c4c836ff35c1e260bb79ac353936faee1953c3da\""
Jul 2 00:13:38.572956 containerd[1440]: time="2024-07-02T00:13:38.572902208Z" level=info msg="StartContainer for \"8da1b20e346ca35011e02805c4c836ff35c1e260bb79ac353936faee1953c3da\""
Jul 2 00:13:38.574995 containerd[1440]: time="2024-07-02T00:13:38.574950583Z" level=info msg="CreateContainer within sandbox \"db8798c5758230ed2ebf424d12918ff301e983b334b1f4d10ee74bc0cf51ab6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0834c09502a3b5c56b7f0230111e844eda3cf715678354f81f5c447460a3171\""
Jul 2 00:13:38.576826 containerd[1440]: time="2024-07-02T00:13:38.575595333Z" level=info msg="StartContainer for \"e0834c09502a3b5c56b7f0230111e844eda3cf715678354f81f5c447460a3171\""
Jul 2 00:13:38.607742 systemd[1]: Started cri-containerd-8da1b20e346ca35011e02805c4c836ff35c1e260bb79ac353936faee1953c3da.scope - libcontainer container 8da1b20e346ca35011e02805c4c836ff35c1e260bb79ac353936faee1953c3da.
Jul 2 00:13:38.612608 systemd[1]: Started cri-containerd-e0834c09502a3b5c56b7f0230111e844eda3cf715678354f81f5c447460a3171.scope - libcontainer container e0834c09502a3b5c56b7f0230111e844eda3cf715678354f81f5c447460a3171.
Jul 2 00:13:38.644697 containerd[1440]: time="2024-07-02T00:13:38.644625323Z" level=info msg="StartContainer for \"8da1b20e346ca35011e02805c4c836ff35c1e260bb79ac353936faee1953c3da\" returns successfully"
Jul 2 00:13:38.650060 containerd[1440]: time="2024-07-02T00:13:38.649973741Z" level=info msg="StartContainer for \"e0834c09502a3b5c56b7f0230111e844eda3cf715678354f81f5c447460a3171\" returns successfully"
Jul 2 00:13:39.303158 kubelet[2573]: E0702 00:13:39.302707 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:39.305933 kubelet[2573]: E0702 00:13:39.305349 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:39.456148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365935930.mount: Deactivated successfully.
Jul 2 00:13:39.729646 kubelet[2573]: I0702 00:13:39.729345 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2vdwv" podStartSLOduration=31.729327037 podStartE2EDuration="31.729327037s" podCreationTimestamp="2024-07-02 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:13:39.534024841 +0000 UTC m=+46.535478155" watchObservedRunningTime="2024-07-02 00:13:39.729327037 +0000 UTC m=+46.730780350"
Jul 2 00:13:39.811674 kubelet[2573]: I0702 00:13:39.811266 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cx964" podStartSLOduration=31.81124455 podStartE2EDuration="31.81124455s" podCreationTimestamp="2024-07-02 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:13:39.811100901 +0000 UTC m=+46.812554214" watchObservedRunningTime="2024-07-02 00:13:39.81124455 +0000 UTC m=+46.812697863"
Jul 2 00:13:40.307804 kubelet[2573]: E0702 00:13:40.307476 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:40.307804 kubelet[2573]: E0702 00:13:40.307504 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:40.990792 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:36076.service - OpenSSH per-connection server daemon (10.0.0.1:36076).
Jul 2 00:13:41.035605 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 36076 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:41.037956 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:41.043149 systemd-logind[1432]: New session 11 of user core.
Jul 2 00:13:41.052601 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:13:41.189135 sshd[4001]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:41.193251 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:36076.service: Deactivated successfully.
Jul 2 00:13:41.195302 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:13:41.195982 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:13:41.196863 systemd-logind[1432]: Removed session 11.
Jul 2 00:13:41.311578 kubelet[2573]: E0702 00:13:41.310291 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:41.311578 kubelet[2573]: E0702 00:13:41.311032 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:46.206705 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:36082.service - OpenSSH per-connection server daemon (10.0.0.1:36082).
Jul 2 00:13:46.246172 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 36082 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:46.248106 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:46.252271 systemd-logind[1432]: New session 12 of user core.
Jul 2 00:13:46.264629 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 00:13:46.414059 sshd[4016]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:46.418236 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:36082.service: Deactivated successfully.
Jul 2 00:13:46.420324 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:13:46.421071 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:13:46.421940 systemd-logind[1432]: Removed session 12.
Jul 2 00:13:51.429655 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:39736.service - OpenSSH per-connection server daemon (10.0.0.1:39736).
Jul 2 00:13:51.472291 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 39736 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:51.474298 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:51.478906 systemd-logind[1432]: New session 13 of user core.
Jul 2 00:13:51.486024 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 00:13:51.617592 sshd[4031]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:51.630965 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:39736.service: Deactivated successfully.
Jul 2 00:13:51.633161 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:13:51.635010 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:13:51.637081 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:39738.service - OpenSSH per-connection server daemon (10.0.0.1:39738).
Jul 2 00:13:51.637929 systemd-logind[1432]: Removed session 13.
Jul 2 00:13:51.675336 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 39738 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:51.677182 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:51.681726 systemd-logind[1432]: New session 14 of user core.
Jul 2 00:13:51.688681 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:13:51.872072 sshd[4047]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:51.883352 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:39738.service: Deactivated successfully.
Jul 2 00:13:51.885813 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:13:51.887546 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:13:51.893027 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:39750.service - OpenSSH per-connection server daemon (10.0.0.1:39750).
Jul 2 00:13:51.894339 systemd-logind[1432]: Removed session 14.
Jul 2 00:13:51.930777 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 39750 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:51.932670 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:51.938454 systemd-logind[1432]: New session 15 of user core.
Jul 2 00:13:51.948692 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:13:52.539515 sshd[4060]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:52.543382 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:39750.service: Deactivated successfully.
Jul 2 00:13:52.545406 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:13:52.546160 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:13:52.547187 systemd-logind[1432]: Removed session 15.
Jul 2 00:13:57.553890 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758).
Jul 2 00:13:57.592393 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:13:57.594081 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:57.598878 systemd-logind[1432]: New session 16 of user core.
Jul 2 00:13:57.608583 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:13:57.766474 sshd[4078]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:57.771220 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:39758.service: Deactivated successfully.
Jul 2 00:13:57.773406 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:13:57.774276 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:13:57.775203 systemd-logind[1432]: Removed session 16.
Jul 2 00:14:02.783513 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:44654.service - OpenSSH per-connection server daemon (10.0.0.1:44654).
Jul 2 00:14:02.823302 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 44654 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:14:02.825372 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:02.830233 systemd-logind[1432]: New session 17 of user core.
Jul 2 00:14:02.840896 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:14:02.966058 sshd[4092]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:02.970576 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:44654.service: Deactivated successfully.
Jul 2 00:14:02.972862 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:14:02.973501 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:14:02.974428 systemd-logind[1432]: Removed session 17.
Jul 2 00:14:07.981263 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:44660.service - OpenSSH per-connection server daemon (10.0.0.1:44660).
Jul 2 00:14:08.017344 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 44660 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:14:08.018889 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:08.023762 systemd-logind[1432]: New session 18 of user core.
Jul 2 00:14:08.037583 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:14:08.092493 kubelet[2573]: E0702 00:14:08.091969 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:08.240082 sshd[4106]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:08.256024 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:44660.service: Deactivated successfully.
Jul 2 00:14:08.257995 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:14:08.259785 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:14:08.265747 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:59442.service - OpenSSH per-connection server daemon (10.0.0.1:59442).
Jul 2 00:14:08.266949 systemd-logind[1432]: Removed session 18.
Jul 2 00:14:08.300208 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 59442 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:14:08.302053 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:08.306939 systemd-logind[1432]: New session 19 of user core.
Jul 2 00:14:08.322804 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:14:08.760122 sshd[4120]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:08.772693 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:59442.service: Deactivated successfully.
Jul 2 00:14:08.774731 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:14:08.777373 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:14:08.789890 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444).
Jul 2 00:14:08.791298 systemd-logind[1432]: Removed session 19.
Jul 2 00:14:08.824064 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:14:08.825561 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:08.830207 systemd-logind[1432]: New session 20 of user core.
Jul 2 00:14:08.842594 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:14:10.875563 sshd[4135]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:10.885063 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:59444.service: Deactivated successfully.
Jul 2 00:14:10.887306 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:14:10.889146 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:14:10.894784 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450).
Jul 2 00:14:10.896141 systemd-logind[1432]: Removed session 20.
Jul 2 00:14:10.927416 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:14:10.929212 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:10.934026 systemd-logind[1432]: New session 21 of user core.
Jul 2 00:14:10.940631 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:14:11.425640 sshd[4154]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:11.434569 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:59450.service: Deactivated successfully.
Jul 2 00:14:11.436741 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:14:11.438708 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:14:11.446083 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:59452.service - OpenSSH per-connection server daemon (10.0.0.1:59452).
Jul 2 00:14:11.447307 systemd-logind[1432]: Removed session 21.
Jul 2 00:14:11.480268 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 59452 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:11.481952 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:11.486291 systemd-logind[1432]: New session 22 of user core. Jul 2 00:14:11.497710 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:14:11.606636 sshd[4166]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:11.610818 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:59452.service: Deactivated successfully. Jul 2 00:14:11.612843 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:14:11.613472 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:14:11.614268 systemd-logind[1432]: Removed session 22. Jul 2 00:14:16.092650 kubelet[2573]: E0702 00:14:16.092582 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:16.618666 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:59468.service - OpenSSH per-connection server daemon (10.0.0.1:59468). Jul 2 00:14:16.660094 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 59468 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:16.661910 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:16.666356 systemd-logind[1432]: New session 23 of user core. Jul 2 00:14:16.675616 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:14:16.787073 sshd[4180]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:16.791825 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:59468.service: Deactivated successfully. Jul 2 00:14:16.794130 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:14:16.794861 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:14:16.795923 systemd-logind[1432]: Removed session 23. Jul 2 00:14:21.800635 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:59042.service - OpenSSH per-connection server daemon (10.0.0.1:59042). Jul 2 00:14:21.837646 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 59042 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:21.839761 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:21.844641 systemd-logind[1432]: New session 24 of user core. Jul 2 00:14:21.859585 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:14:21.976745 sshd[4195]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:21.981327 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:59042.service: Deactivated successfully. Jul 2 00:14:21.983668 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:14:21.985142 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:14:21.986118 systemd-logind[1432]: Removed session 24. Jul 2 00:14:26.988971 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:59058.service - OpenSSH per-connection server daemon (10.0.0.1:59058). Jul 2 00:14:27.032768 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 59058 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:27.034892 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:27.039682 systemd-logind[1432]: New session 25 of user core. Jul 2 00:14:27.051596 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:14:27.163373 sshd[4212]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:27.167756 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:59058.service: Deactivated successfully. Jul 2 00:14:27.169666 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:14:27.170295 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:14:27.171533 systemd-logind[1432]: Removed session 25. Jul 2 00:14:29.092183 kubelet[2573]: E0702 00:14:29.092103 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:32.091851 kubelet[2573]: E0702 00:14:32.091807 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:32.174567 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:41432.service - OpenSSH per-connection server daemon (10.0.0.1:41432). Jul 2 00:14:32.210189 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:32.257822 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:32.261958 systemd-logind[1432]: New session 26 of user core. Jul 2 00:14:32.272664 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:14:32.376126 sshd[4226]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:32.380232 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:41432.service: Deactivated successfully. Jul 2 00:14:32.382328 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:14:32.383003 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:14:32.383847 systemd-logind[1432]: Removed session 26. Jul 2 00:14:37.393229 systemd[1]: Started sshd@26-10.0.0.33:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438).
Jul 2 00:14:37.429015 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:37.435675 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:37.440350 systemd-logind[1432]: New session 27 of user core. Jul 2 00:14:37.448584 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:14:37.594792 sshd[4240]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:37.599732 systemd[1]: sshd@26-10.0.0.33:22-10.0.0.1:41438.service: Deactivated successfully. Jul 2 00:14:37.602679 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:14:37.603423 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:14:37.604355 systemd-logind[1432]: Removed session 27. Jul 2 00:14:42.091392 kubelet[2573]: E0702 00:14:42.091317 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:42.606588 systemd[1]: Started sshd@27-10.0.0.33:22-10.0.0.1:51528.service - OpenSSH per-connection server daemon (10.0.0.1:51528). Jul 2 00:14:42.643051 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 51528 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:42.644622 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:42.648868 systemd-logind[1432]: New session 28 of user core. Jul 2 00:14:42.659614 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:14:42.800530 sshd[4256]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:42.813112 systemd[1]: sshd@27-10.0.0.33:22-10.0.0.1:51528.service: Deactivated successfully. Jul 2 00:14:42.815403 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:14:42.817531 systemd-logind[1432]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:14:42.827121 systemd[1]: Started sshd@28-10.0.0.33:22-10.0.0.1:51538.service - OpenSSH per-connection server daemon (10.0.0.1:51538). Jul 2 00:14:42.828522 systemd-logind[1432]: Removed session 28. Jul 2 00:14:42.858569 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 51538 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:42.860486 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:42.865955 systemd-logind[1432]: New session 29 of user core. Jul 2 00:14:42.875731 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:14:44.415945 containerd[1440]: time="2024-07-02T00:14:44.415861601Z" level=info msg="StopContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" with timeout 30 (s)" Jul 2 00:14:44.416512 containerd[1440]: time="2024-07-02T00:14:44.416413423Z" level=info msg="Stop container \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" with signal terminated" Jul 2 00:14:44.430932 containerd[1440]: time="2024-07-02T00:14:44.430593818Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:14:44.430670 systemd[1]: cri-containerd-93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff.scope: Deactivated successfully.
Jul 2 00:14:44.434366 containerd[1440]: time="2024-07-02T00:14:44.434331819Z" level=info msg="StopContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" with timeout 2 (s)" Jul 2 00:14:44.434617 containerd[1440]: time="2024-07-02T00:14:44.434583875Z" level=info msg="Stop container \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" with signal terminated" Jul 2 00:14:44.442038 systemd-networkd[1385]: lxc_health: Link DOWN Jul 2 00:14:44.442047 systemd-networkd[1385]: lxc_health: Lost carrier Jul 2 00:14:44.454107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff-rootfs.mount: Deactivated successfully. Jul 2 00:14:44.467054 systemd[1]: cri-containerd-38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340.scope: Deactivated successfully. Jul 2 00:14:44.467395 systemd[1]: cri-containerd-38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340.scope: Consumed 7.606s CPU time. Jul 2 00:14:44.490049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340-rootfs.mount: Deactivated successfully. 
Jul 2 00:14:44.566347 containerd[1440]: time="2024-07-02T00:14:44.566271566Z" level=info msg="shim disconnected" id=93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff namespace=k8s.io Jul 2 00:14:44.566347 containerd[1440]: time="2024-07-02T00:14:44.566338803Z" level=warning msg="cleaning up after shim disconnected" id=93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff namespace=k8s.io Jul 2 00:14:44.566347 containerd[1440]: time="2024-07-02T00:14:44.566347369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:44.566705 containerd[1440]: time="2024-07-02T00:14:44.566363490Z" level=info msg="shim disconnected" id=38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340 namespace=k8s.io Jul 2 00:14:44.566705 containerd[1440]: time="2024-07-02T00:14:44.566406932Z" level=warning msg="cleaning up after shim disconnected" id=38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340 namespace=k8s.io Jul 2 00:14:44.566705 containerd[1440]: time="2024-07-02T00:14:44.566415037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:44.580312 containerd[1440]: time="2024-07-02T00:14:44.580238348Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:14:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:14:44.666247 containerd[1440]: time="2024-07-02T00:14:44.666060956Z" level=info msg="StopContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" returns successfully" Jul 2 00:14:44.667411 containerd[1440]: time="2024-07-02T00:14:44.667358808Z" level=info msg="StopContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" returns successfully" Jul 2 00:14:44.669517 containerd[1440]: time="2024-07-02T00:14:44.669475175Z" level=info msg="StopPodSandbox for \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\"" Jul 2 00:14:44.670466 containerd[1440]: time="2024-07-02T00:14:44.670391987Z" level=info msg="StopPodSandbox for \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\"" Jul 2 00:14:44.679636 containerd[1440]: time="2024-07-02T00:14:44.669529248Z" level=info msg="Container to stop \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.679636 containerd[1440]: time="2024-07-02T00:14:44.679615419Z" level=info msg="Container to stop \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.679636 containerd[1440]: time="2024-07-02T00:14:44.679627752Z" level=info msg="Container to stop \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.679636 containerd[1440]: time="2024-07-02T00:14:44.679638592Z" level=info msg="Container to stop \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.679636 containerd[1440]: time="2024-07-02T00:14:44.679647800Z" level=info msg="Container to stop \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.681991 containerd[1440]: time="2024-07-02T00:14:44.670462901Z" level=info msg="Container to stop \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:14:44.682248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732-shm.mount: Deactivated successfully.
Jul 2 00:14:44.686978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0-shm.mount: Deactivated successfully. Jul 2 00:14:44.691010 systemd[1]: cri-containerd-52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0.scope: Deactivated successfully. Jul 2 00:14:44.692320 systemd[1]: cri-containerd-3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732.scope: Deactivated successfully. Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.801880321Z" level=info msg="shim disconnected" id=52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0 namespace=k8s.io Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.801944222Z" level=warning msg="cleaning up after shim disconnected" id=52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0 namespace=k8s.io Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.801953219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.801887946Z" level=info msg="shim disconnected" id=3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732 namespace=k8s.io Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.802058699Z" level=warning msg="cleaning up after shim disconnected" id=3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732 namespace=k8s.io Jul 2 00:14:44.802175 containerd[1440]: time="2024-07-02T00:14:44.802067766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:44.816869 containerd[1440]: time="2024-07-02T00:14:44.816804061Z" level=info msg="TearDown network for sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" successfully" Jul 2 00:14:44.816869 containerd[1440]: time="2024-07-02T00:14:44.816844818Z" level=info msg="StopPodSandbox for \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" returns successfully" Jul 2 00:14:44.818866 containerd[1440]: time="2024-07-02T00:14:44.818829627Z" level=info msg="TearDown network for sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" successfully" Jul 2 00:14:44.818866 containerd[1440]: time="2024-07-02T00:14:44.818850025Z" level=info msg="StopPodSandbox for \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" returns successfully" Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999603 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rr6w\" (UniqueName: \"kubernetes.io/projected/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-kube-api-access-4rr6w\") pod \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\" (UID: \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\") " Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999661 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-run\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999681 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fe7b374-3586-4562-8303-96dd51800931-cilium-config-path\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999698 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-bpf-maps\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999712 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-xtables-lock\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:44.999778 kubelet[2573]: I0702 00:14:44.999727 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fe7b374-3586-4562-8303-96dd51800931-clustermesh-secrets\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000531 kubelet[2573]: I0702 00:14:44.999742 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-cilium-config-path\") pod \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\" (UID: \"c296ce57-b4aa-4de5-b017-39dc5c8f4eea\") " Jul 2 00:14:45.000531 kubelet[2573]: I0702 00:14:44.999757 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdzkh\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-kube-api-access-rdzkh\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000531 kubelet[2573]: I0702 00:14:44.999770 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-hostproc\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000531 kubelet[2573]: I0702 00:14:44.999782 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-net\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000531 kubelet[2573]: I0702 00:14:44.999785 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999820 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999801 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-lib-modules\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999860 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cni-path\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999883 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-etc-cni-netd\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999904 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-cgroup\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000710 kubelet[2573]: I0702 00:14:44.999961 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-hubble-tls\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:44.999984 2573 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-kernel\") pod \"0fe7b374-3586-4562-8303-96dd51800931\" (UID: \"0fe7b374-3586-4562-8303-96dd51800931\") " Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:45.000035 2573 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:45.000050 2573 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:45.000076 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:45.000099 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cni-path" (OuterVolumeSpecName: "cni-path") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.000915 kubelet[2573]: I0702 00:14:45.000112 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.001139 kubelet[2573]: I0702 00:14:45.000125 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.004432 kubelet[2573]: I0702 00:14:45.003275 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe7b374-3586-4562-8303-96dd51800931-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:14:45.004432 kubelet[2573]: I0702 00:14:45.003651 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c296ce57-b4aa-4de5-b017-39dc5c8f4eea" (UID: "c296ce57-b4aa-4de5-b017-39dc5c8f4eea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:14:45.004432 kubelet[2573]: I0702 00:14:45.003679 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.004432 kubelet[2573]: I0702 00:14:45.003699 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.004432 kubelet[2573]: I0702 00:14:45.004362 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-kube-api-access-4rr6w" (OuterVolumeSpecName: "kube-api-access-4rr6w") pod "c296ce57-b4aa-4de5-b017-39dc5c8f4eea" (UID: "c296ce57-b4aa-4de5-b017-39dc5c8f4eea"). InnerVolumeSpecName "kube-api-access-4rr6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:14:45.004665 kubelet[2573]: I0702 00:14:45.004459 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-hostproc" (OuterVolumeSpecName: "hostproc") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.004665 kubelet[2573]: I0702 00:14:45.004487 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:14:45.004665 kubelet[2573]: I0702 00:14:45.004534 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:14:45.006475 kubelet[2573]: I0702 00:14:45.006428 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-kube-api-access-rdzkh" (OuterVolumeSpecName: "kube-api-access-rdzkh") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "kube-api-access-rdzkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:14:45.006529 kubelet[2573]: I0702 00:14:45.006472 2573 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe7b374-3586-4562-8303-96dd51800931-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0fe7b374-3586-4562-8303-96dd51800931" (UID: "0fe7b374-3586-4562-8303-96dd51800931"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:14:45.099543 systemd[1]: Removed slice kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice - libcontainer container kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice. Jul 2 00:14:45.099645 systemd[1]: kubepods-burstable-pod0fe7b374_3586_4562_8303_96dd51800931.slice: Consumed 7.719s CPU time. Jul 2 00:14:45.100541 kubelet[2573]: I0702 00:14:45.100517 2573 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100590 systemd[1]: Removed slice kubepods-besteffort-podc296ce57_b4aa_4de5_b017_39dc5c8f4eea.slice - libcontainer container kubepods-besteffort-podc296ce57_b4aa_4de5_b017_39dc5c8f4eea.slice.
Jul 2 00:14:45.100697 kubelet[2573]: I0702 00:14:45.100669 2573 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100697 kubelet[2573]: I0702 00:14:45.100686 2573 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100697 kubelet[2573]: I0702 00:14:45.100696 2573 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4rr6w\" (UniqueName: \"kubernetes.io/projected/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-kube-api-access-4rr6w\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100697 kubelet[2573]: I0702 00:14:45.100706 2573 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fe7b374-3586-4562-8303-96dd51800931-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100716 2573 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100726 2573 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100734 2573 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fe7b374-3586-4562-8303-96dd51800931-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100743 2573 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c296ce57-b4aa-4de5-b017-39dc5c8f4eea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100752 2573 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100762 2573 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rdzkh\" (UniqueName: \"kubernetes.io/projected/0fe7b374-3586-4562-8303-96dd51800931-kube-api-access-rdzkh\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100770 2573 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.100924 kubelet[2573]: I0702 00:14:45.100778 2573 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.101107 kubelet[2573]: I0702 00:14:45.100786 2573 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fe7b374-3586-4562-8303-96dd51800931-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 00:14:45.408141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0-rootfs.mount: Deactivated successfully. Jul 2 00:14:45.408289 systemd[1]: var-lib-kubelet-pods-c296ce57\x2db4aa\x2d4de5\x2db017\x2d39dc5c8f4eea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4rr6w.mount: Deactivated successfully.
Jul 2 00:14:45.408398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732-rootfs.mount: Deactivated successfully. Jul 2 00:14:45.408513 systemd[1]: var-lib-kubelet-pods-0fe7b374\x2d3586\x2d4562\x2d8303\x2d96dd51800931-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:14:45.408631 systemd[1]: var-lib-kubelet-pods-0fe7b374\x2d3586\x2d4562\x2d8303\x2d96dd51800931-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdzkh.mount: Deactivated successfully. Jul 2 00:14:45.408738 systemd[1]: var-lib-kubelet-pods-0fe7b374\x2d3586\x2d4562\x2d8303\x2d96dd51800931-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:14:45.482391 kubelet[2573]: I0702 00:14:45.482344 2573 scope.go:117] "RemoveContainer" containerID="38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340" Jul 2 00:14:45.485764 containerd[1440]: time="2024-07-02T00:14:45.485720689Z" level=info msg="RemoveContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\"" Jul 2 00:14:45.569358 containerd[1440]: time="2024-07-02T00:14:45.569312677Z" level=info msg="RemoveContainer for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" returns successfully" Jul 2 00:14:45.569635 kubelet[2573]: I0702 00:14:45.569596 2573 scope.go:117] "RemoveContainer" containerID="f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391" Jul 2 00:14:45.570467 containerd[1440]: time="2024-07-02T00:14:45.570437171Z" level=info msg="RemoveContainer for \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\"" Jul 2 00:14:45.676381 containerd[1440]: time="2024-07-02T00:14:45.676247932Z" level=info msg="RemoveContainer for \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\" returns successfully" Jul 2 00:14:45.677100 kubelet[2573]: I0702 00:14:45.676668 2573 scope.go:117] "RemoveContainer" 
containerID="5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a" Jul 2 00:14:45.677820 containerd[1440]: time="2024-07-02T00:14:45.677798100Z" level=info msg="RemoveContainer for \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\"" Jul 2 00:14:45.714094 containerd[1440]: time="2024-07-02T00:14:45.714041151Z" level=info msg="RemoveContainer for \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\" returns successfully" Jul 2 00:14:45.714490 kubelet[2573]: I0702 00:14:45.714466 2573 scope.go:117] "RemoveContainer" containerID="ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020" Jul 2 00:14:45.715643 containerd[1440]: time="2024-07-02T00:14:45.715606477Z" level=info msg="RemoveContainer for \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\"" Jul 2 00:14:45.749521 containerd[1440]: time="2024-07-02T00:14:45.749462259Z" level=info msg="RemoveContainer for \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\" returns successfully" Jul 2 00:14:45.749896 kubelet[2573]: I0702 00:14:45.749798 2573 scope.go:117] "RemoveContainer" containerID="a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6" Jul 2 00:14:45.751272 containerd[1440]: time="2024-07-02T00:14:45.751241811Z" level=info msg="RemoveContainer for \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\"" Jul 2 00:14:45.784584 containerd[1440]: time="2024-07-02T00:14:45.784541092Z" level=info msg="RemoveContainer for \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\" returns successfully" Jul 2 00:14:45.785004 kubelet[2573]: I0702 00:14:45.784868 2573 scope.go:117] "RemoveContainer" containerID="38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340" Jul 2 00:14:45.785317 containerd[1440]: time="2024-07-02T00:14:45.785242857Z" level=error msg="ContainerStatus for \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\" failed" error="rpc error: code = NotFound 
desc = an error occurred when try to find container \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\": not found" Jul 2 00:14:45.790938 kubelet[2573]: E0702 00:14:45.790900 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\": not found" containerID="38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340" Jul 2 00:14:45.791128 kubelet[2573]: I0702 00:14:45.790936 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340"} err="failed to get container status \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\": rpc error: code = NotFound desc = an error occurred when try to find container \"38bf8b58adf1f0061538c72fe8dc18d15f55947ed130d2a42f447eb41ffa7340\": not found" Jul 2 00:14:45.791128 kubelet[2573]: I0702 00:14:45.791034 2573 scope.go:117] "RemoveContainer" containerID="f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391" Jul 2 00:14:45.791421 containerd[1440]: time="2024-07-02T00:14:45.791363046Z" level=error msg="ContainerStatus for \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\": not found" Jul 2 00:14:45.791679 kubelet[2573]: E0702 00:14:45.791637 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\": not found" containerID="f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391" Jul 2 00:14:45.791747 kubelet[2573]: I0702 00:14:45.791682 2573 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391"} err="failed to get container status \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8cff1baa49d872cecfa71ddf461825463ba8a92c95015e4030b02fb8c867391\": not found" Jul 2 00:14:45.791747 kubelet[2573]: I0702 00:14:45.791713 2573 scope.go:117] "RemoveContainer" containerID="5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a" Jul 2 00:14:45.792082 containerd[1440]: time="2024-07-02T00:14:45.792029945Z" level=error msg="ContainerStatus for \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\": not found" Jul 2 00:14:45.792235 kubelet[2573]: E0702 00:14:45.792212 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\": not found" containerID="5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a" Jul 2 00:14:45.792235 kubelet[2573]: I0702 00:14:45.792233 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a"} err="failed to get container status \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5535d8ce67b28375d1bbdc81f2d170b8845466c7a76788b781d9ec587cd1d70a\": not found" Jul 2 00:14:45.792324 kubelet[2573]: I0702 00:14:45.792246 2573 scope.go:117] "RemoveContainer" 
containerID="ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020" Jul 2 00:14:45.792428 containerd[1440]: time="2024-07-02T00:14:45.792393683Z" level=error msg="ContainerStatus for \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\": not found" Jul 2 00:14:45.792533 kubelet[2573]: E0702 00:14:45.792512 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\": not found" containerID="ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020" Jul 2 00:14:45.792650 kubelet[2573]: I0702 00:14:45.792531 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020"} err="failed to get container status \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b24a817736e40d3d79994ab5353e464a1c0194e5611341b62fc0abb3a7020\": not found" Jul 2 00:14:45.792650 kubelet[2573]: I0702 00:14:45.792544 2573 scope.go:117] "RemoveContainer" containerID="a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6" Jul 2 00:14:45.792729 containerd[1440]: time="2024-07-02T00:14:45.792660146Z" level=error msg="ContainerStatus for \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\": not found" Jul 2 00:14:45.792800 kubelet[2573]: E0702 00:14:45.792753 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\": not found" containerID="a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6" Jul 2 00:14:45.792800 kubelet[2573]: I0702 00:14:45.792788 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6"} err="failed to get container status \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a42fd913843e4424405f02e2e0151c6e2d2936041927fdf9d8dab30c31191ca6\": not found" Jul 2 00:14:45.792884 kubelet[2573]: I0702 00:14:45.792809 2573 scope.go:117] "RemoveContainer" containerID="93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff" Jul 2 00:14:45.793851 containerd[1440]: time="2024-07-02T00:14:45.793817462Z" level=info msg="RemoveContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\"" Jul 2 00:14:45.839250 containerd[1440]: time="2024-07-02T00:14:45.839178633Z" level=info msg="RemoveContainer for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" returns successfully" Jul 2 00:14:45.839599 kubelet[2573]: I0702 00:14:45.839554 2573 scope.go:117] "RemoveContainer" containerID="93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff" Jul 2 00:14:45.839932 containerd[1440]: time="2024-07-02T00:14:45.839857977Z" level=error msg="ContainerStatus for \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\": not found" Jul 2 00:14:45.840148 kubelet[2573]: E0702 00:14:45.840020 2573 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\": not found" containerID="93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff" Jul 2 00:14:45.840148 kubelet[2573]: I0702 00:14:45.840052 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff"} err="failed to get container status \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\": rpc error: code = NotFound desc = an error occurred when try to find container \"93efbb781759ed9522a9f9506fe98da902b5766738261a60fcb919d34e504dff\": not found" Jul 2 00:14:46.295670 sshd[4271]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:46.306868 systemd[1]: sshd@28-10.0.0.33:22-10.0.0.1:51538.service: Deactivated successfully. Jul 2 00:14:46.309090 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:14:46.311054 systemd-logind[1432]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:14:46.324000 systemd[1]: Started sshd@29-10.0.0.33:22-10.0.0.1:51552.service - OpenSSH per-connection server daemon (10.0.0.1:51552). Jul 2 00:14:46.325103 systemd-logind[1432]: Removed session 29. Jul 2 00:14:46.371701 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 51552 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:46.373457 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:46.378097 systemd-logind[1432]: New session 30 of user core. Jul 2 00:14:46.394613 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 2 00:14:47.044806 sshd[4437]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:47.051560 systemd[1]: sshd@29-10.0.0.33:22-10.0.0.1:51552.service: Deactivated successfully. Jul 2 00:14:47.056760 systemd[1]: session-30.scope: Deactivated successfully. 
Jul 2 00:14:47.057696 systemd-logind[1432]: Session 30 logged out. Waiting for processes to exit. Jul 2 00:14:47.072547 systemd[1]: Started sshd@30-10.0.0.33:22-10.0.0.1:51558.service - OpenSSH per-connection server daemon (10.0.0.1:51558). Jul 2 00:14:47.074142 systemd-logind[1432]: Removed session 30. Jul 2 00:14:47.094547 kubelet[2573]: I0702 00:14:47.094502 2573 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe7b374-3586-4562-8303-96dd51800931" path="/var/lib/kubelet/pods/0fe7b374-3586-4562-8303-96dd51800931/volumes" Jul 2 00:14:47.095478 kubelet[2573]: I0702 00:14:47.095432 2573 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c296ce57-b4aa-4de5-b017-39dc5c8f4eea" path="/var/lib/kubelet/pods/c296ce57-b4aa-4de5-b017-39dc5c8f4eea/volumes" Jul 2 00:14:47.107495 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 51558 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:47.107690 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:47.112399 kubelet[2573]: I0702 00:14:47.111526 2573 topology_manager.go:215] "Topology Admit Handler" podUID="a680d635-a365-4d72-ae55-152f1cffa2b9" podNamespace="kube-system" podName="cilium-smqv5" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111616 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="mount-cgroup" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111628 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="apply-sysctl-overwrites" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111635 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="mount-bpf-fs" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111644 2573 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="c296ce57-b4aa-4de5-b017-39dc5c8f4eea" containerName="cilium-operator" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111653 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="clean-cilium-state" Jul 2 00:14:47.112399 kubelet[2573]: E0702 00:14:47.111662 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="cilium-agent" Jul 2 00:14:47.112399 kubelet[2573]: I0702 00:14:47.111721 2573 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe7b374-3586-4562-8303-96dd51800931" containerName="cilium-agent" Jul 2 00:14:47.112399 kubelet[2573]: I0702 00:14:47.111731 2573 memory_manager.go:354] "RemoveStaleState removing state" podUID="c296ce57-b4aa-4de5-b017-39dc5c8f4eea" containerName="cilium-operator" Jul 2 00:14:47.116196 systemd-logind[1432]: New session 31 of user core. Jul 2 00:14:47.128518 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 2 00:14:47.132341 systemd[1]: Created slice kubepods-burstable-poda680d635_a365_4d72_ae55_152f1cffa2b9.slice - libcontainer container kubepods-burstable-poda680d635_a365_4d72_ae55_152f1cffa2b9.slice. Jul 2 00:14:47.184313 sshd[4450]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:47.200113 systemd[1]: sshd@30-10.0.0.33:22-10.0.0.1:51558.service: Deactivated successfully. Jul 2 00:14:47.202043 systemd[1]: session-31.scope: Deactivated successfully. Jul 2 00:14:47.203870 systemd-logind[1432]: Session 31 logged out. Waiting for processes to exit. Jul 2 00:14:47.209732 systemd[1]: Started sshd@31-10.0.0.33:22-10.0.0.1:51572.service - OpenSSH per-connection server daemon (10.0.0.1:51572). Jul 2 00:14:47.210664 systemd-logind[1432]: Removed session 31. 
Jul 2 00:14:47.214479 kubelet[2573]: I0702 00:14:47.214435 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-bpf-maps\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214484 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tdw\" (UniqueName: \"kubernetes.io/projected/a680d635-a365-4d72-ae55-152f1cffa2b9-kube-api-access-f5tdw\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214502 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a680d635-a365-4d72-ae55-152f1cffa2b9-cilium-ipsec-secrets\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214519 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-hostproc\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214539 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-lib-modules\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214553 2573 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-etc-cni-netd\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214570 kubelet[2573]: I0702 00:14:47.214566 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-host-proc-sys-net\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214579 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-host-proc-sys-kernel\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214594 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a680d635-a365-4d72-ae55-152f1cffa2b9-clustermesh-secrets\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214609 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-cni-path\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214621 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-cilium-cgroup\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214633 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-xtables-lock\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214754 kubelet[2573]: I0702 00:14:47.214645 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a680d635-a365-4d72-ae55-152f1cffa2b9-hubble-tls\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214899 kubelet[2573]: I0702 00:14:47.214661 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a680d635-a365-4d72-ae55-152f1cffa2b9-cilium-run\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.214899 kubelet[2573]: I0702 00:14:47.214673 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a680d635-a365-4d72-ae55-152f1cffa2b9-cilium-config-path\") pod \"cilium-smqv5\" (UID: \"a680d635-a365-4d72-ae55-152f1cffa2b9\") " pod="kube-system/cilium-smqv5" Jul 2 00:14:47.242074 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 51572 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:14:47.243747 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:47.248316 systemd-logind[1432]: New session 32 of user core. 
Jul 2 00:14:47.261566 systemd[1]: Started session-32.scope - Session 32 of User core. Jul 2 00:14:47.435516 kubelet[2573]: E0702 00:14:47.435469 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:47.436115 containerd[1440]: time="2024-07-02T00:14:47.436051391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-smqv5,Uid:a680d635-a365-4d72-ae55-152f1cffa2b9,Namespace:kube-system,Attempt:0,}" Jul 2 00:14:47.605506 containerd[1440]: time="2024-07-02T00:14:47.605363088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:14:47.605506 containerd[1440]: time="2024-07-02T00:14:47.605484256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:47.605653 containerd[1440]: time="2024-07-02T00:14:47.605513031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:14:47.605653 containerd[1440]: time="2024-07-02T00:14:47.605531385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:47.628581 systemd[1]: Started cri-containerd-e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851.scope - libcontainer container e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851. 
Jul 2 00:14:47.651478 containerd[1440]: time="2024-07-02T00:14:47.651403825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-smqv5,Uid:a680d635-a365-4d72-ae55-152f1cffa2b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\"" Jul 2 00:14:47.652352 kubelet[2573]: E0702 00:14:47.652317 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:47.654737 containerd[1440]: time="2024-07-02T00:14:47.654697665Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:14:47.927427 containerd[1440]: time="2024-07-02T00:14:47.927355687Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca\"" Jul 2 00:14:47.928022 containerd[1440]: time="2024-07-02T00:14:47.927954548Z" level=info msg="StartContainer for \"aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca\"" Jul 2 00:14:47.957685 systemd[1]: Started cri-containerd-aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca.scope - libcontainer container aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca. Jul 2 00:14:48.016512 containerd[1440]: time="2024-07-02T00:14:48.016408019Z" level=info msg="StartContainer for \"aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca\" returns successfully" Jul 2 00:14:48.025269 systemd[1]: cri-containerd-aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca.scope: Deactivated successfully. 
Jul 2 00:14:48.142628 containerd[1440]: time="2024-07-02T00:14:48.142555786Z" level=info msg="shim disconnected" id=aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca namespace=k8s.io Jul 2 00:14:48.142628 containerd[1440]: time="2024-07-02T00:14:48.142621410Z" level=warning msg="cleaning up after shim disconnected" id=aa631845c8a841feeaa756ea2cbdb87fc30ae257f8712c6c820cfabe1d8898ca namespace=k8s.io Jul 2 00:14:48.142628 containerd[1440]: time="2024-07-02T00:14:48.142630618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:48.170407 kubelet[2573]: E0702 00:14:48.170335 2573 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:14:48.495634 kubelet[2573]: E0702 00:14:48.495379 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:48.498869 containerd[1440]: time="2024-07-02T00:14:48.498194360Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:14:48.598480 containerd[1440]: time="2024-07-02T00:14:48.598355082Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78\"" Jul 2 00:14:48.599171 containerd[1440]: time="2024-07-02T00:14:48.599132220Z" level=info msg="StartContainer for \"1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78\"" Jul 2 00:14:48.632642 systemd[1]: Started cri-containerd-1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78.scope - libcontainer container 
1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78. Jul 2 00:14:48.673564 systemd[1]: cri-containerd-1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78.scope: Deactivated successfully. Jul 2 00:14:48.731140 containerd[1440]: time="2024-07-02T00:14:48.731076220Z" level=info msg="StartContainer for \"1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78\" returns successfully" Jul 2 00:14:48.967552 containerd[1440]: time="2024-07-02T00:14:48.967479549Z" level=info msg="shim disconnected" id=1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78 namespace=k8s.io Jul 2 00:14:48.967552 containerd[1440]: time="2024-07-02T00:14:48.967541405Z" level=warning msg="cleaning up after shim disconnected" id=1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78 namespace=k8s.io Jul 2 00:14:48.967552 containerd[1440]: time="2024-07-02T00:14:48.967550543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:49.323383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3272fda6077d06cdf54199e65f08159998c014cc5bd5c685f00d4b64fc6c78-rootfs.mount: Deactivated successfully. Jul 2 00:14:49.499707 kubelet[2573]: E0702 00:14:49.499673 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:49.501398 containerd[1440]: time="2024-07-02T00:14:49.501360388Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:14:49.678457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265397820.mount: Deactivated successfully. 
Jul 2 00:14:50.006484 containerd[1440]: time="2024-07-02T00:14:50.006230019Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e\""
Jul 2 00:14:50.006855 containerd[1440]: time="2024-07-02T00:14:50.006815144Z" level=info msg="StartContainer for \"77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e\""
Jul 2 00:14:50.037660 systemd[1]: Started cri-containerd-77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e.scope - libcontainer container 77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e.
Jul 2 00:14:50.089908 systemd[1]: cri-containerd-77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e.scope: Deactivated successfully.
Jul 2 00:14:50.143196 containerd[1440]: time="2024-07-02T00:14:50.143107976Z" level=info msg="StartContainer for \"77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e\" returns successfully"
Jul 2 00:14:50.215057 containerd[1440]: time="2024-07-02T00:14:50.214981353Z" level=info msg="shim disconnected" id=77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e namespace=k8s.io
Jul 2 00:14:50.215057 containerd[1440]: time="2024-07-02T00:14:50.215047077Z" level=warning msg="cleaning up after shim disconnected" id=77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e namespace=k8s.io
Jul 2 00:14:50.215057 containerd[1440]: time="2024-07-02T00:14:50.215058930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:14:50.324050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77cd9cc882d7695dfbaccc77fc0477def26096c6ee9893a45f4b857e88f6673e-rootfs.mount: Deactivated successfully.
Jul 2 00:14:50.503420 kubelet[2573]: E0702 00:14:50.503376 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:50.506292 containerd[1440]: time="2024-07-02T00:14:50.506246461Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:14:50.778866 containerd[1440]: time="2024-07-02T00:14:50.778793424Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a\""
Jul 2 00:14:50.779796 containerd[1440]: time="2024-07-02T00:14:50.779747084Z" level=info msg="StartContainer for \"337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a\""
Jul 2 00:14:50.817766 systemd[1]: Started cri-containerd-337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a.scope - libcontainer container 337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a.
Jul 2 00:14:50.847463 systemd[1]: cri-containerd-337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a.scope: Deactivated successfully.
Jul 2 00:14:50.890266 containerd[1440]: time="2024-07-02T00:14:50.890212842Z" level=info msg="StartContainer for \"337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a\" returns successfully"
Jul 2 00:14:50.992102 containerd[1440]: time="2024-07-02T00:14:50.992010458Z" level=info msg="shim disconnected" id=337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a namespace=k8s.io
Jul 2 00:14:50.992102 containerd[1440]: time="2024-07-02T00:14:50.992075300Z" level=warning msg="cleaning up after shim disconnected" id=337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a namespace=k8s.io
Jul 2 00:14:50.992102 containerd[1440]: time="2024-07-02T00:14:50.992083726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:14:51.323544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-337e6d4a0c80b20f1950e98d30770b76606c7961b89708facdcaa65254063d2a-rootfs.mount: Deactivated successfully.
Jul 2 00:14:51.507111 kubelet[2573]: E0702 00:14:51.507070 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:51.509287 containerd[1440]: time="2024-07-02T00:14:51.509246157Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:14:51.768909 containerd[1440]: time="2024-07-02T00:14:51.768817269Z" level=info msg="CreateContainer within sandbox \"e95c340367232dc3b2454d5b3d4daff6074d5cfacc5b1ed30ed9028499add851\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1\""
Jul 2 00:14:51.769779 containerd[1440]: time="2024-07-02T00:14:51.769613283Z" level=info msg="StartContainer for \"b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1\""
Jul 2 00:14:51.803699 systemd[1]: Started cri-containerd-b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1.scope - libcontainer container b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1.
Jul 2 00:14:51.885493 containerd[1440]: time="2024-07-02T00:14:51.885409113Z" level=info msg="StartContainer for \"b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1\" returns successfully"
Jul 2 00:14:52.318474 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:14:52.513390 kubelet[2573]: E0702 00:14:52.513348 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:52.527225 kubelet[2573]: I0702 00:14:52.527144 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-smqv5" podStartSLOduration=5.527115848 podStartE2EDuration="5.527115848s" podCreationTimestamp="2024-07-02 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:52.526264451 +0000 UTC m=+119.527717764" watchObservedRunningTime="2024-07-02 00:14:52.527115848 +0000 UTC m=+119.528569161"
Jul 2 00:14:53.080430 containerd[1440]: time="2024-07-02T00:14:53.080382591Z" level=info msg="StopPodSandbox for \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\""
Jul 2 00:14:53.080950 containerd[1440]: time="2024-07-02T00:14:53.080499200Z" level=info msg="TearDown network for sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" successfully"
Jul 2 00:14:53.080950 containerd[1440]: time="2024-07-02T00:14:53.080510401Z" level=info msg="StopPodSandbox for \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" returns successfully"
Jul 2 00:14:53.080950 containerd[1440]: time="2024-07-02T00:14:53.080913783Z" level=info msg="RemovePodSandbox for \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\""
Jul 2 00:14:53.080950 containerd[1440]: time="2024-07-02T00:14:53.080940744Z" level=info msg="Forcibly stopping sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\""
Jul 2 00:14:53.085722 containerd[1440]: time="2024-07-02T00:14:53.081003723Z" level=info msg="TearDown network for sandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" successfully"
Jul 2 00:14:53.090853 containerd[1440]: time="2024-07-02T00:14:53.090812223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:14:53.090950 containerd[1440]: time="2024-07-02T00:14:53.090876064Z" level=info msg="RemovePodSandbox \"52811350242b41bafca238d573a656e785943b8c2360c8ae6a4d0e817ae34de0\" returns successfully"
Jul 2 00:14:53.091364 containerd[1440]: time="2024-07-02T00:14:53.091339498Z" level=info msg="StopPodSandbox for \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\""
Jul 2 00:14:53.091459 containerd[1440]: time="2024-07-02T00:14:53.091423297Z" level=info msg="TearDown network for sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" successfully"
Jul 2 00:14:53.091459 containerd[1440]: time="2024-07-02T00:14:53.091458554Z" level=info msg="StopPodSandbox for \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" returns successfully"
Jul 2 00:14:53.091707 containerd[1440]: time="2024-07-02T00:14:53.091668580Z" level=info msg="RemovePodSandbox for \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\""
Jul 2 00:14:53.091707 containerd[1440]: time="2024-07-02T00:14:53.091693426Z" level=info msg="Forcibly stopping sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\""
Jul 2 00:14:53.091789 containerd[1440]: time="2024-07-02T00:14:53.091744253Z" level=info msg="TearDown network for sandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" successfully"
Jul 2 00:14:53.095770 containerd[1440]: time="2024-07-02T00:14:53.095733721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:14:53.095867 containerd[1440]: time="2024-07-02T00:14:53.095780831Z" level=info msg="RemovePodSandbox \"3b92eb3403a8973daf026ffbbaf1a4677dfe5926c18545820ab7cb97c14dc732\" returns successfully"
Jul 2 00:14:53.515518 kubelet[2573]: E0702 00:14:53.515473 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:55.638538 systemd-networkd[1385]: lxc_health: Link UP
Jul 2 00:14:55.647218 systemd-networkd[1385]: lxc_health: Gained carrier
Jul 2 00:14:55.994277 systemd[1]: run-containerd-runc-k8s.io-b9abe111350ef3b917f3f7e7bc2f15f9e4c5afad5156bfde5a48975d20aa26a1-runc.8u3ny0.mount: Deactivated successfully.
Jul 2 00:14:56.710629 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Jul 2 00:14:57.438110 kubelet[2573]: E0702 00:14:57.438072 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:57.523739 kubelet[2573]: E0702 00:14:57.523686 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:04.495378 sshd[4458]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:04.500477 systemd[1]: sshd@31-10.0.0.33:22-10.0.0.1:51572.service: Deactivated successfully.
Jul 2 00:15:04.503297 systemd[1]: session-32.scope: Deactivated successfully.
Jul 2 00:15:04.504162 systemd-logind[1432]: Session 32 logged out. Waiting for processes to exit.
Jul 2 00:15:04.505328 systemd-logind[1432]: Removed session 32.