Sep 4 17:29:35.905247 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:29:35.905275 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:35.905289 kernel: BIOS-provided physical RAM map: Sep 4 17:29:35.905298 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:29:35.905307 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 17:29:35.905315 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 17:29:35.905326 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 17:29:35.905335 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 17:29:35.905344 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 4 17:29:35.905353 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 4 17:29:35.905364 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 4 17:29:35.905373 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 4 17:29:35.905382 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 4 17:29:35.905391 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 4 17:29:35.905402 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 4 17:29:35.905415 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 17:29:35.905425 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 4 17:29:35.905435 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 4 17:29:35.905455 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 17:29:35.905464 kernel: NX (Execute Disable) protection: active Sep 4 17:29:35.905474 kernel: APIC: Static calls initialized Sep 4 17:29:35.905484 kernel: efi: EFI v2.7 by EDK II Sep 4 17:29:35.905494 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4ee018 Sep 4 17:29:35.905503 kernel: SMBIOS 2.8 present. 
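A quick sanity check on the BIOS-e820 map above is to sum the ranges flagged "usable". The values below are copied from the log; the ~2.45 GiB result is roughly the ~2.5 GB of RAM the kernel reports as present later in this boot. This is an illustrative calculation, not part of the original log:

```python
# Rough sketch: sum the "usable" BIOS-e820 ranges printed above.
# Start/end values are copied from the log; the end addresses are inclusive.
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x00000000007fffff),
    (0x0000000000808000, 0x000000000080afff),
    (0x000000000080c000, 0x000000000080ffff),
    (0x0000000000900000, 0x000000009c8eefff),
    (0x000000009cbff000, 0x000000009cf3ffff),
]
total = sum(end - start + 1 for start, end in usable)
print(f"{total} bytes ~= {total / 2**30:.2f} GiB")  # ~2.45 GiB
```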
Sep 4 17:29:35.905513 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Sep 4 17:29:35.905522 kernel: Hypervisor detected: KVM Sep 4 17:29:35.905532 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:29:35.905545 kernel: kvm-clock: using sched offset of 4162337180 cycles Sep 4 17:29:35.905555 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:29:35.905565 kernel: tsc: Detected 2794.746 MHz processor Sep 4 17:29:35.905575 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:29:35.905586 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:29:35.905595 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 4 17:29:35.905606 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:29:35.905616 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:29:35.905626 kernel: Using GB pages for direct mapping Sep 4 17:29:35.905638 kernel: Secure boot disabled Sep 4 17:29:35.905648 kernel: ACPI: Early table checksum verification disabled Sep 4 17:29:35.905658 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 4 17:29:35.905668 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:29:35.905683 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:35.905693 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:35.905706 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 4 17:29:35.905716 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:35.905727 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:35.905738 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:29:35.905748 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 4 17:29:35.905772 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Sep 4 17:29:35.905782 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Sep 4 17:29:35.905793 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 4 17:29:35.905806 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Sep 4 17:29:35.905816 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Sep 4 17:29:35.905827 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Sep 4 17:29:35.905837 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Sep 4 17:29:35.905848 kernel: No NUMA configuration found Sep 4 17:29:35.905858 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 4 17:29:35.905868 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 4 17:29:35.905879 kernel: Zone ranges: Sep 4 17:29:35.905889 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:29:35.905902 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 4 17:29:35.905913 kernel: Normal empty Sep 4 17:29:35.905923 kernel: Movable zone start for each node Sep 4 17:29:35.905933 kernel: Early memory node ranges Sep 4 17:29:35.905944 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:29:35.905954 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 4 17:29:35.905964 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 4 17:29:35.905975 
kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 4 17:29:35.905985 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 4 17:29:35.905995 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 4 17:29:35.906009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 4 17:29:35.906019 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:29:35.906029 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:29:35.906040 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 4 17:29:35.906050 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:29:35.906061 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 4 17:29:35.906071 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 4 17:29:35.906082 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 4 17:29:35.906092 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:29:35.906105 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:29:35.906116 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:29:35.906126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 17:29:35.906137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:29:35.906147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:29:35.906158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:29:35.906168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:29:35.906178 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:29:35.906189 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:29:35.906202 kernel: TSC deadline timer available Sep 4 17:29:35.906213 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 17:29:35.906223 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:29:35.907546 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 17:29:35.907557 kernel: kvm-guest: setup PV sched yield Sep 4 17:29:35.907568 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Sep 4 17:29:35.907578 kernel: Booting paravirtualized kernel on KVM Sep 4 17:29:35.907589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:29:35.907600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 17:29:35.907615 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Sep 4 17:29:35.907625 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Sep 4 17:29:35.907635 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 17:29:35.907646 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:29:35.907656 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:29:35.907669 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:35.907681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
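The "Kernel command line:" entry above carries the Flatcar-specific parameters (mount.usr, verity.usr, verity.usrhash, flatcar.first_boot) that dracut and Ignition act on later in this log. As a hedged illustration (not code from Flatcar itself), splitting the logged line into key=value tokens shows what those consumers see; note that rootflags and mount.usrflags appear twice because the boot stub prepends them:

```python
# Illustrative only: split the kernel command line from the log into
# key=value pairs the way later tooling (dracut, Ignition) reads it.
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected "
    "verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf"
)
params = [token.partition("=")[::2] for token in cmdline.split()]

# For repeated keys such as rootflags, the later occurrence wins in a dict.
print(dict(params)["root"])                # LABEL=ROOT
print(dict(params)["flatcar.first_boot"])  # detected
```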
Sep 4 17:29:35.907693 kernel: random: crng init done Sep 4 17:29:35.907704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:29:35.907717 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:29:35.907728 kernel: Fallback order for Node 0: 0 Sep 4 17:29:35.907738 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 4 17:29:35.907748 kernel: Policy zone: DMA32 Sep 4 17:29:35.907780 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:29:35.907792 kernel: Memory: 2388160K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 178580K reserved, 0K cma-reserved) Sep 4 17:29:35.907802 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:29:35.907813 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:29:35.907827 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:29:35.907837 kernel: Dynamic Preempt: voluntary Sep 4 17:29:35.907848 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:29:35.907859 kernel: rcu: RCU event tracing is enabled. Sep 4 17:29:35.907870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:29:35.907893 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:29:35.907904 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:29:35.907915 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:29:35.907926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:29:35.907937 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:29:35.907948 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 17:29:35.907959 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:29:35.907970 kernel: Console: colour dummy device 80x25 Sep 4 17:29:35.907983 kernel: printk: console [ttyS0] enabled Sep 4 17:29:35.907994 kernel: ACPI: Core revision 20230628 Sep 4 17:29:35.908005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 17:29:35.908017 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:29:35.908030 kernel: x2apic enabled Sep 4 17:29:35.908041 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:29:35.908052 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 17:29:35.908063 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 17:29:35.908074 kernel: kvm-guest: setup PV IPIs Sep 4 17:29:35.908085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 17:29:35.908096 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 17:29:35.908108 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Sep 4 17:29:35.908119 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 17:29:35.908133 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 17:29:35.908144 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 17:29:35.908155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:29:35.908166 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:29:35.908177 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:29:35.908188 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:29:35.908199 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 17:29:35.908210 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 17:29:35.908221 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:29:35.908235 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:29:35.908246 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 17:29:35.908258 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 17:29:35.908269 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 17:29:35.908280 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:29:35.908291 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:29:35.908302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:29:35.908313 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:29:35.908325 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 4 17:29:35.908338 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:29:35.908349 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:29:35.908360 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:29:35.908371 kernel: SELinux: Initializing. Sep 4 17:29:35.908382 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:29:35.908394 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:29:35.908405 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 17:29:35.908416 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:35.908430 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:35.908441 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:29:35.908462 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 17:29:35.908473 kernel: ... version: 0 Sep 4 17:29:35.908484 kernel: ... bit width: 48 Sep 4 17:29:35.908495 kernel: ... generic registers: 6 Sep 4 17:29:35.908506 kernel: ... value mask: 0000ffffffffffff Sep 4 17:29:35.908516 kernel: ... max period: 00007fffffffffff Sep 4 17:29:35.908527 kernel: ... fixed-purpose events: 0 Sep 4 17:29:35.908538 kernel: ... event mask: 000000000000003f Sep 4 17:29:35.908552 kernel: signal: max sigframe size: 1776 Sep 4 17:29:35.908563 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:29:35.908574 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:29:35.908585 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:29:35.908596 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:29:35.908607 kernel: .... node #0, CPUs: #1 #2 #3 Sep 4 17:29:35.908618 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:29:35.908629 kernel: smpboot: Max logical packages: 1 Sep 4 17:29:35.908640 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Sep 4 17:29:35.908653 kernel: devtmpfs: initialized Sep 4 17:29:35.908664 kernel: x86/mm: Memory block size: 128MB Sep 4 17:29:35.908675 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 4 17:29:35.908687 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 4 17:29:35.908698 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 4 17:29:35.908709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 4 17:29:35.908720 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 4 17:29:35.908731 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:29:35.908743 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:29:35.908775 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:29:35.908786 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:29:35.908797 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:29:35.908809 kernel: audit: type=2000 audit(1725470975.998:1): state=initialized audit_enabled=0 res=1 Sep 4 17:29:35.908819 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:29:35.908830 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:29:35.908841 kernel: cpuidle: using governor menu Sep 4 17:29:35.908852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:29:35.908863 kernel: dca service started, version 1.12.1 Sep 4 17:29:35.908877 kernel: PCI: Using configuration type 1 for base access Sep 4 17:29:35.908888 kernel: PCI: Using configuration type 1 for extended access Sep 4 17:29:35.908899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
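Because the TSC frequency is already known (2794.746 MHz), the delay-loop calibration above is skipped and BogoMIPS is preset from lpj=2794746. Both the per-CPU figure (5589.49) and the 4-CPU total (22357.96) can be reproduced with the kernel's truncating integer arithmetic; HZ=1000 here is an assumption implied by the logged values:

```python
# BogoMIPS preset from lpj (loops per jiffy); the kernel prints
# lpj * HZ / 500000, truncated to two decimal places.
lpj, HZ, cpus = 2794746, 1000, 4              # HZ=1000 inferred from the log
per_cpu = lpj * HZ // 5000                     # in hundredths of a BogoMIPS
total = cpus * lpj * HZ // 5000
print(f"{per_cpu // 100}.{per_cpu % 100:02d} BogoMIPS per CPU")  # 5589.49
print(f"{total // 100}.{total % 100:02d} BogoMIPS total")        # 22357.96
```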
Sep 4 17:29:35.908910 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:29:35.908921 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:29:35.908932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:29:35.908943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:29:35.908954 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:29:35.908965 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:29:35.908979 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:29:35.908990 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:29:35.909001 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:29:35.909012 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:29:35.909023 kernel: ACPI: Interpreter enabled Sep 4 17:29:35.909034 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:29:35.909045 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:29:35.909056 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:29:35.909067 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:29:35.909081 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 17:29:35.909092 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:29:35.909296 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:29:35.909313 kernel: acpiphp: Slot [3] registered Sep 4 17:29:35.909324 kernel: acpiphp: Slot [4] registered Sep 4 17:29:35.909335 kernel: acpiphp: Slot [5] registered Sep 4 17:29:35.909346 kernel: acpiphp: Slot [6] registered Sep 4 17:29:35.909357 kernel: acpiphp: Slot [7] registered Sep 4 17:29:35.909371 kernel: acpiphp: Slot [8] registered Sep 4 17:29:35.909382 kernel: acpiphp: Slot [9] registered Sep 4 17:29:35.909393 kernel: acpiphp: Slot [10] registered Sep 4 17:29:35.909404 kernel: acpiphp: Slot [11] registered Sep 4 17:29:35.909414 kernel: acpiphp: Slot [12] registered Sep 4 17:29:35.909425 kernel: acpiphp: Slot [13] registered Sep 4 17:29:35.909436 kernel: acpiphp: Slot [14] registered Sep 4 17:29:35.909456 kernel: acpiphp: Slot [15] registered Sep 4 17:29:35.909467 kernel: acpiphp: Slot [16] registered Sep 4 17:29:35.909480 kernel: acpiphp: Slot [17] registered Sep 4 17:29:35.909491 kernel: acpiphp: Slot [18] registered Sep 4 17:29:35.909502 kernel: acpiphp: Slot [19] registered Sep 4 17:29:35.909513 kernel: acpiphp: Slot [20] registered Sep 4 17:29:35.909523 kernel: acpiphp: Slot [21] registered Sep 4 17:29:35.909534 kernel: acpiphp: Slot [22] registered Sep 4 17:29:35.909545 kernel: acpiphp: Slot [23] registered Sep 4 17:29:35.909556 kernel: acpiphp: Slot [24] registered Sep 4 17:29:35.909566 kernel: acpiphp: Slot [25] registered Sep 4 17:29:35.909577 kernel: acpiphp: Slot [26] registered Sep 4 17:29:35.909591 kernel: acpiphp: Slot [27] registered Sep 4 17:29:35.909601 kernel: acpiphp: Slot [28] registered Sep 4 17:29:35.909612 kernel: acpiphp: Slot [29] registered Sep 4 17:29:35.909623 kernel: acpiphp: Slot [30] registered Sep 4 17:29:35.909634 kernel: acpiphp: Slot [31] registered Sep 4 17:29:35.909645 kernel: PCI host bridge to bus 0000:00 Sep 4 17:29:35.909823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:29:35.910029 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:29:35.910195 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 
17:29:35.910333 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Sep 4 17:29:35.910480 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Sep 4 17:29:35.911970 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:29:35.912148 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:29:35.912299 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:29:35.912466 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 17:29:35.912610 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Sep 4 17:29:35.912774 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 17:29:35.912949 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 17:29:35.913088 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 17:29:35.913227 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 17:29:35.913376 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:29:35.913534 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:29:35.913684 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:29:35.913874 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Sep 4 17:29:35.914015 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 4 17:29:35.915365 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Sep 4 17:29:35.915524 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 4 17:29:35.915663 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Sep 4 17:29:35.915824 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:29:35.915978 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:29:35.916118 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Sep 4 17:29:35.916258 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 4 17:29:35.916397 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 4 17:29:35.916558 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 4 17:29:35.916707 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 17:29:35.916910 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 4 17:29:35.917089 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 4 17:29:35.917257 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:29:35.917407 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:29:35.917570 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Sep 4 17:29:35.917720 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 4 17:29:35.917889 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 4 17:29:35.917909 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:29:35.917921 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:29:35.917932 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:29:35.917943 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:29:35.917954 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:29:35.917965 kernel: iommu: Default domain type: Translated Sep 4 17:29:35.917976 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:29:35.917987 kernel: efivars: Registered efivars operations Sep 4 
17:29:35.917998 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:29:35.918012 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:29:35.918023 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 4 17:29:35.918034 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 4 17:29:35.918045 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 4 17:29:35.918056 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 4 17:29:35.918205 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 17:29:35.918354 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 17:29:35.918515 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:29:35.918535 kernel: vgaarb: loaded Sep 4 17:29:35.918546 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 17:29:35.918557 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 17:29:35.918569 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:29:35.918579 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:29:35.918591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:29:35.918601 kernel: pnp: PnP ACPI init Sep 4 17:29:35.918779 kernel: pnp 00:02: [dma 2] Sep 4 17:29:35.918795 kernel: pnp: PnP ACPI: found 6 devices Sep 4 17:29:35.918811 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:29:35.918822 kernel: NET: Registered PF_INET protocol family Sep 4 17:29:35.918833 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:29:35.918845 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:29:35.918856 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:29:35.918867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:29:35.918878 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:29:35.918889 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:29:35.918903 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:29:35.918914 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:29:35.918925 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:29:35.918936 kernel: NET: Registered PF_XDP protocol family Sep 4 17:29:35.919089 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 4 17:29:35.919239 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 4 17:29:35.919379 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:29:35.919527 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:29:35.919670 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:29:35.919839 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Sep 4 17:29:35.919976 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Sep 4 17:29:35.920127 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 17:29:35.920277 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:29:35.920292 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:29:35.920303 kernel: Initialise system trusted keyrings Sep 4 17:29:35.920314 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:29:35.920330 kernel: Key type asymmetric 
registered Sep 4 17:29:35.920341 kernel: Asymmetric key parser 'x509' registered Sep 4 17:29:35.920352 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:29:35.920363 kernel: io scheduler mq-deadline registered Sep 4 17:29:35.920374 kernel: io scheduler kyber registered Sep 4 17:29:35.920384 kernel: io scheduler bfq registered Sep 4 17:29:35.920395 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:29:35.920407 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:29:35.920418 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:29:35.920430 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:29:35.920454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:29:35.920466 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:29:35.920477 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:29:35.920508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:29:35.920522 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:29:35.920679 kernel: rtc_cmos 00:05: RTC can wake from S4 Sep 4 17:29:35.920696 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 17:29:35.920884 kernel: rtc_cmos 00:05: registered as rtc0 Sep 4 17:29:35.921043 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:29:35 UTC (1725470975) Sep 4 17:29:35.921182 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 17:29:35.921197 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 17:29:35.921209 kernel: efifb: probing for efifb Sep 4 17:29:35.921221 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 4 17:29:35.921232 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 4 17:29:35.921244 kernel: efifb: scrolling: redraw Sep 4 17:29:35.921256 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 4 17:29:35.921271 kernel: Console: switching to colour frame buffer device 100x37 Sep 4 17:29:35.921283 kernel: fb0: EFI VGA frame buffer device Sep 4 17:29:35.921294 kernel: pstore: Using crash dump compression: deflate Sep 4 17:29:35.921306 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:29:35.921318 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:29:35.921330 kernel: Segment Routing with IPv6 Sep 4 17:29:35.921341 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:29:35.921353 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:29:35.921367 kernel: Key type dns_resolver registered Sep 4 17:29:35.921381 kernel: IPI shorthand broadcast: enabled Sep 4 17:29:35.921393 kernel: sched_clock: Marking stable (671002307, 107587138)->(789764336, -11174891) Sep 4 17:29:35.921407 kernel: registered taskstats version 1 Sep 4 17:29:35.921419 kernel: Loading compiled-in X.509 certificates Sep 4 17:29:35.921431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:29:35.921453 kernel: Key type .fscrypt registered Sep 4 17:29:35.921468 kernel: Key type fscrypt-provisioning registered Sep 4 17:29:35.921480 kernel: ima: No TPM chip found, activating TPM-bypass! 
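The rtc_cmos line above reports both the human-readable boot time and the matching epoch value (1725470975); converting one to the other confirms they agree. Illustrative check only:

```python
# Check that the logged epoch matches the logged UTC timestamp:
# "rtc_cmos 00:05: setting system clock to 2024-09-04T17:29:35 UTC (1725470975)"
from datetime import datetime, timezone

epoch = 1725470975
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2024-09-04T17:29:35+00:00
```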
Sep 4 17:29:35.921491 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:29:35.921503 kernel: ima: No architecture policies found Sep 4 17:29:35.921514 kernel: clk: Disabling unused clocks Sep 4 17:29:35.921526 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:29:35.921538 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:29:35.921550 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:29:35.921561 kernel: Run /init as init process Sep 4 17:29:35.921575 kernel: with arguments: Sep 4 17:29:35.921587 kernel: /init Sep 4 17:29:35.921598 kernel: with environment: Sep 4 17:29:35.921609 kernel: HOME=/ Sep 4 17:29:35.921621 kernel: TERM=linux Sep 4 17:29:35.921632 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:29:35.921647 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:35.921668 systemd[1]: Detected virtualization kvm. Sep 4 17:29:35.921684 systemd[1]: Detected architecture x86-64. Sep 4 17:29:35.921699 systemd[1]: Running in initrd. Sep 4 17:29:35.921714 systemd[1]: No hostname configured, using default hostname. Sep 4 17:29:35.921728 systemd[1]: Hostname set to . Sep 4 17:29:35.921744 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:29:35.921772 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:29:35.921788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:35.921805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:35.921818 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:29:35.921831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:35.921843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:29:35.921856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:29:35.921870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:29:35.921883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:29:35.921898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:35.921910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:35.921922 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:35.921934 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:35.921946 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:29:35.921959 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:35.921971 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:35.921984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:35.921996 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:29:35.922011 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 4 17:29:35.922023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:35.922035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:35.922047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:35.922060 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:35.922072 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:29:35.922084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:35.922096 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:29:35.922111 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:29:35.922124 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:35.922136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:35.922148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:35.922160 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:35.922172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:35.922184 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:29:35.922201 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:29:35.922213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:35.922248 systemd-journald[193]: Collecting audit messages is disabled. Sep 4 17:29:35.922280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:35.922292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:29:35.922308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:35.922320 systemd-journald[193]: Journal started Sep 4 17:29:35.922345 systemd-journald[193]: Runtime Journal (/run/log/journal/c084f7aa73b04bf2bda56e7ded9a1a8f) is 6.0M, max 48.3M, 42.3M free. Sep 4 17:29:35.889619 systemd-modules-load[194]: Inserted module 'overlay' Sep 4 17:29:35.928838 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:35.928861 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:29:35.928896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:35.930041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:35.934029 kernel: Bridge firewalling registered Sep 4 17:29:35.933588 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 4 17:29:35.933831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:35.934901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:35.948980 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:29:35.951309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:35.952098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Sep 4 17:29:35.961657 dracut-cmdline[220]: dracut-dracut-053 Sep 4 17:29:35.964986 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:29:35.975652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:35.981938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:36.017072 systemd-resolved[254]: Positive Trust Anchors: Sep 4 17:29:36.017090 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:36.017126 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:36.019972 systemd-resolved[254]: Defaulting to hostname 'linux'. Sep 4 17:29:36.021057 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:36.026175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:36.058780 kernel: SCSI subsystem initialized Sep 4 17:29:36.069774 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:29:36.081780 kernel: iscsi: registered transport (tcp) Sep 4 17:29:36.107166 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:29:36.107201 kernel: QLogic iSCSI HBA Driver Sep 4 17:29:36.149680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:36.169944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:29:36.198131 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:29:36.198191 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:29:36.199220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:29:36.243799 kernel: raid6: avx2x4 gen() 29460 MB/s Sep 4 17:29:36.260775 kernel: raid6: avx2x2 gen() 27826 MB/s Sep 4 17:29:36.277831 kernel: raid6: avx2x1 gen() 25527 MB/s Sep 4 17:29:36.277859 kernel: raid6: using algorithm avx2x4 gen() 29460 MB/s Sep 4 17:29:36.295832 kernel: raid6: .... xor() 7790 MB/s, rmw enabled Sep 4 17:29:36.295850 kernel: raid6: using avx2x2 recovery algorithm Sep 4 17:29:36.320776 kernel: xor: automatically using best checksumming function avx Sep 4 17:29:36.493792 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:29:36.504746 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:36.514892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:36.528196 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 4 17:29:36.532564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
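The raid6 lines above are a boot-time benchmark: the kernel times several SIMD gen() implementations and keeps the fastest, which is why "using algorithm avx2x4 gen() 29460 MB/s" repeats the top result. A minimal sketch of that selection, using the throughputs from this log:

```python
# The kernel benchmarks each raid6 gen() routine and keeps the fastest one.
results = {"avx2x4": 29460, "avx2x2": 27826, "avx2x1": 25527}  # MB/s, from the log
best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
```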
Sep 4 17:29:36.543893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:29:36.555730 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 4 17:29:36.584827 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:36.597888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:36.662084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:36.668953 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:29:36.681861 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:36.682699 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:36.685216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:36.692884 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 17:29:36.693053 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:29:36.686444 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:36.693966 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:29:36.704370 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:29:36.704388 kernel: GPT:9289727 != 19775487 Sep 4 17:29:36.704398 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:29:36.704408 kernel: GPT:9289727 != 19775487 Sep 4 17:29:36.704417 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:29:36.704436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:36.705771 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:29:36.722211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:36.731009 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:29:36.731062 kernel: AES CTR mode by8 optimization enabled Sep 4 17:29:36.731073 kernel: libata version 3.00 loaded. Sep 4 17:29:36.733782 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 17:29:36.735775 kernel: scsi host0: ata_piix Sep 4 17:29:36.735987 kernel: scsi host1: ata_piix Sep 4 17:29:36.736129 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Sep 4 17:29:36.737416 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Sep 4 17:29:36.743277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:36.743448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:36.746323 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:36.754748 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Sep 4 17:29:36.754790 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (476) Sep 4 17:29:36.749724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:36.749907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:36.755011 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:36.762565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:36.772040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
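The GPT warnings above are typical of a disk image that was built small and later grown: the virtio disk is 19775488 512-byte sectors (~10.1 GB), but the backup GPT header still sits at sector 9289727, i.e. at the end of a ~4.4 GiB image. The log's own advice is to rewrite the backup header at the real end of the disk with GNU Parted; the arithmetic behind the warning is easy to reproduce (illustrative only):

```python
# Reproduce the sizes behind the GPT warnings above.
sector = 512
disk_sectors = 19775488      # from: virtio_blk ... 19775488 512-byte logical blocks
alt_header_sector = 9289727  # from: GPT:9289727 != 19775487
print(f"disk: {disk_sectors * sector / 1e9:.1f} GB "
      f"({disk_sectors * sector / 2**30:.2f} GiB)")                 # 10.1 GB (9.43 GiB)
print(f"backup GPT header placed for a "
      f"{(alt_header_sector + 1) * sector / 2**30:.2f} GiB disk")   # ~4.43 GiB
```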
Sep 4 17:29:36.785959 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:29:36.795212 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:29:36.796495 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:29:36.804249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:29:36.822880 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:29:36.824054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:36.824109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:36.826500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:36.832561 disk-uuid[541]: Primary Header is updated. Sep 4 17:29:36.832561 disk-uuid[541]: Secondary Entries is updated. Sep 4 17:29:36.832561 disk-uuid[541]: Secondary Header is updated. Sep 4 17:29:36.835943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:36.829617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:36.839774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:36.849293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:36.853918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:29:36.879589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:36.893029 kernel: ata2: found unknown device (class 0) Sep 4 17:29:36.894778 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 17:29:36.897827 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 17:29:36.952812 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 17:29:36.953094 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:29:36.965841 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 4 17:29:37.862403 disk-uuid[543]: The operation has completed successfully. Sep 4 17:29:37.863736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:29:37.887537 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:29:37.887653 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:29:37.908914 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:29:37.914285 sh[582]: Success Sep 4 17:29:37.927787 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 17:29:37.959142 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:29:37.976110 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:29:37.980522 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 17:29:37.990813 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602 Sep 4 17:29:37.990842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:37.990853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:29:37.993212 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:29:37.993228 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:29:37.997021 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:29:37.998066 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:29:38.008868 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:29:38.013877 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:29:38.019815 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:38.019847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:38.019861 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:38.022784 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:38.032042 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:29:38.033996 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:38.044170 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:29:38.050915 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:29:38.101064 ignition[674]: Ignition 2.18.0 Sep 4 17:29:38.101676 ignition[674]: Stage: fetch-offline Sep 4 17:29:38.101726 ignition[674]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:38.101737 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:38.102002 ignition[674]: parsed url from cmdline: "" Sep 4 17:29:38.102007 ignition[674]: no config URL provided Sep 4 17:29:38.102014 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:29:38.102026 ignition[674]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:29:38.102066 ignition[674]: op(1): [started] loading QEMU firmware config module Sep 4 17:29:38.102073 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:29:38.111275 ignition[674]: op(1): [finished] loading QEMU firmware config module Sep 4 17:29:38.131162 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:38.142899 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:38.157662 ignition[674]: parsing config with SHA512: 924b5ec1b02809104a26998bc794543fab0921dfc2e40f63a59ea5a4c5d248d36b6ac8a78390d483d20359992231ae9bef477482a3bf118b933138bf463fc27f Sep 4 17:29:38.162427 unknown[674]: fetched base config from "system" Sep 4 17:29:38.162441 unknown[674]: fetched user config from "qemu" Sep 4 17:29:38.164592 ignition[674]: fetch-offline: fetch-offline passed Sep 4 17:29:38.165517 ignition[674]: Ignition finished successfully Sep 4 17:29:38.165062 systemd-networkd[773]: lo: Link UP Sep 4 17:29:38.165066 systemd-networkd[773]: lo: Gained carrier Sep 4 17:29:38.166613 systemd-networkd[773]: Enumeration completed Sep 4 17:29:38.166833 systemd[1]: Started systemd-networkd.service - Network Configuration. 
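Ignition's fetch-offline stage above loads the QEMU fw_cfg module (op(1): "modprobe" "qemu_fw_cfg") and then logs the SHA512 of the config it parsed before merging the base and user configs. As a hedged illustration of that digest line, the same kind of fingerprint can be produced with hashlib; the file path below is a placeholder, not something taken from this log:

```python
# Illustrative only: compute the kind of SHA512 fingerprint Ignition logs
# ("parsing config with SHA512: ...") for a local config file.
import hashlib

with open("example-ignition-config.json", "rb") as f:  # hypothetical path
    digest = hashlib.sha512(f.read()).hexdigest()
print(f"parsing config with SHA512: {digest}")
```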
Sep 4 17:29:38.167303 systemd[1]: Reached target network.target - Network. Sep 4 17:29:38.167843 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:38.167848 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:29:38.168725 systemd-networkd[773]: eth0: Link UP Sep 4 17:29:38.168731 systemd-networkd[773]: eth0: Gained carrier Sep 4 17:29:38.168739 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:38.182562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:38.183153 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:29:38.188949 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:29:38.198848 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:29:38.204025 ignition[776]: Ignition 2.18.0 Sep 4 17:29:38.204035 ignition[776]: Stage: kargs Sep 4 17:29:38.204193 ignition[776]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:38.204204 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:38.208022 ignition[776]: kargs: kargs passed Sep 4 17:29:38.208078 ignition[776]: Ignition finished successfully Sep 4 17:29:38.212187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:29:38.224890 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:29:38.236142 ignition[786]: Ignition 2.18.0 Sep 4 17:29:38.236155 ignition[786]: Stage: disks Sep 4 17:29:38.236365 ignition[786]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:38.236379 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:38.239591 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:29:38.237458 ignition[786]: disks: disks passed Sep 4 17:29:38.241445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:38.237514 ignition[786]: Ignition finished successfully Sep 4 17:29:38.243376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:29:38.243965 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:38.244320 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:38.244653 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:38.251936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:29:38.263403 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:29:38.269723 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:29:38.282844 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:29:38.379791 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none. Sep 4 17:29:38.380471 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:29:38.382089 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:38.398852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:38.400632 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
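systemd-networkd's DHCPv4 lease above (10.0.0.130/16 with gateway 10.0.0.1, acquired from 10.0.0.1) can be pulled apart with the standard ipaddress module; a small, purely illustrative check that the gateway sits on the leased subnet:

```python
# Check the DHCPv4 lease reported above: 10.0.0.130/16, gateway 10.0.0.1.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.130/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True
```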
Sep 4 17:29:38.402155 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:29:38.410177 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Sep 4 17:29:38.410198 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:38.410209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:38.410219 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:38.402190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:29:38.402212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:38.408987 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:29:38.412694 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:29:38.420776 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:38.422337 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:29:38.450582 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:29:38.454914 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:29:38.458413 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:29:38.462784 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:29:38.538804 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:38.546879 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:29:38.548742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:29:38.555782 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:38.574294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:29:38.576962 ignition[921]: INFO : Ignition 2.18.0 Sep 4 17:29:38.576962 ignition[921]: INFO : Stage: mount Sep 4 17:29:38.578567 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:38.578567 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:38.581335 ignition[921]: INFO : mount: mount passed Sep 4 17:29:38.582123 ignition[921]: INFO : Ignition finished successfully Sep 4 17:29:38.584708 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:29:38.592882 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:29:38.990134 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:29:38.999010 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:29:39.005781 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933) Sep 4 17:29:39.007850 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b Sep 4 17:29:39.007863 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:29:39.007874 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:29:39.010778 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:29:39.013122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
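The fsck line above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") is the usual e2fsck summary of inode and block usage on the ROOT filesystem; turning it into percentages shows the root partition is still nearly empty at this point in first boot. Illustrative arithmetic only:

```python
# Usage behind: "ROOT: clean, 14/553520 files, 52654/553472 blocks"
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes: {100 * files_used / files_total:.3f}% used")    # ~0.003%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%
```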
Sep 4 17:29:39.034524 ignition[950]: INFO : Ignition 2.18.0 Sep 4 17:29:39.034524 ignition[950]: INFO : Stage: files Sep 4 17:29:39.036291 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:39.036291 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:39.036291 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:29:39.039917 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:29:39.039917 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:29:39.039917 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:29:39.039917 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:29:39.039917 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:29:39.039904 unknown[950]: wrote ssh authorized keys file for user: core Sep 4 17:29:39.048056 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:39.048056 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:29:39.094490 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:29:39.214464 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:29:39.216705 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:29:39.216705 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 17:29:39.704382 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:29:39.771699 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:29:39.773972 ignition[950]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:29:39.773972 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Sep 4 17:29:39.778855 systemd-networkd[773]: eth0: Gained IPv6LL Sep 4 17:29:40.155874 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:29:40.479354 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:29:40.479354 ignition[950]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:29:40.483119 ignition[950]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:29:40.485339 ignition[950]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:29:40.506352 ignition[950]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:29:40.510770 ignition[950]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:29:40.512417 ignition[950]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:29:40.512417 ignition[950]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:40.512417 ignition[950]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:29:40.512417 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:40.512417 ignition[950]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:29:40.512417 ignition[950]: INFO : files: files passed Sep 4 17:29:40.512417 ignition[950]: INFO : Ignition finished successfully Sep 4 17:29:40.524086 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:29:40.531915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:29:40.534309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:29:40.537019 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:29:40.537127 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:29:40.549598 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:29:40.553748 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:40.553748 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:40.557212 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:29:40.559911 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:40.560375 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:29:40.574880 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:29:40.599181 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:29:40.599309 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:29:40.601672 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:29:40.603844 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:29:40.604912 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:29:40.605685 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:29:40.625034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:40.632938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:29:40.643456 systemd[1]: Stopped target network.target - Network. Sep 4 17:29:40.645371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:40.647836 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:40.650362 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:29:40.652276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:29:40.653378 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:29:40.656010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:29:40.658164 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:29:40.660176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:29:40.662528 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:29:40.665023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:29:40.667396 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
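The provisioning config Ignition consumed is not captured in this journal; only its effects are visible (the downloaded files, the kubernetes.raw symlink, and the unit presets logged above). Purely as an illustration, a config of roughly that shape could be emitted as below. The spec version, unit contents, and SSH key are placeholders, and the smaller files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, update.conf) are omitted for brevity.

```python
import json

# Hypothetical reconstruction -- the real config is not present in the journal.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version for Ignition 2.18
    "passwd": {
        "users": [
            # The journal only records that SSH keys were added for "core";
            # the key below is a placeholder.
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
        ],
        "links": [
            # Matches the symlink written in op(a) above.
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # Only the presets are visible in the log; unit contents are not.
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
```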
Sep 4 17:29:40.669541 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:29:40.672185 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:29:40.674413 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:29:40.676512 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:29:40.678215 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:29:40.679300 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:29:40.681658 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:40.683928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:40.686359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:29:40.687453 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:40.690234 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:29:40.691310 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:29:40.693643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:29:40.694772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:29:40.697210 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:29:40.699016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:29:40.704798 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:40.707582 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:29:40.709431 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:29:40.711288 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:29:40.712182 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:29:40.714189 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:29:40.715122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:29:40.717168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:29:40.718360 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:29:40.720896 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:29:40.721898 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:29:40.739969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:29:40.741961 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:29:40.743064 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:40.746469 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:29:40.748505 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:29:40.750900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:29:40.753004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 4 17:29:40.754066 ignition[1004]: INFO : Ignition 2.18.0 Sep 4 17:29:40.754066 ignition[1004]: INFO : Stage: umount Sep 4 17:29:40.754066 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:29:40.754066 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:29:40.759309 ignition[1004]: INFO : umount: umount passed Sep 4 17:29:40.759309 ignition[1004]: INFO : Ignition finished successfully Sep 4 17:29:40.754166 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:40.755739 systemd-networkd[773]: eth0: DHCPv6 lease lost Sep 4 17:29:40.760075 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:29:40.762114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:29:40.768298 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:29:40.769368 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:29:40.772727 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:29:40.773818 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:29:40.777416 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:29:40.778908 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:29:40.779931 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:29:40.783634 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:29:40.784702 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:29:40.788996 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:29:40.789062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:40.792523 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:29:40.792581 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:29:40.794619 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:29:40.794673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:29:40.795197 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:29:40.795251 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:29:40.795706 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:29:40.795798 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:29:40.810873 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:29:40.812726 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:29:40.812812 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:29:40.813386 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:40.813441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:40.813727 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:29:40.813791 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:40.814242 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:29:40.814293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:40.814688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:40.829927 systemd[1]: systemd-udevd.service: Deactivated successfully. 
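The umount stage above completes the Ignition run for this boot (fetch-offline, kargs, disks, mount, files, umount, each reporting "passed"). If the journal text is available as a plain file, a short sketch like the following could pull out that summary; this is an illustration added here, not something the system ran.

```python
import re

def summarize_ignition(journal_text: str) -> dict:
    """Map each Ignition stage named in the journal ('Stage: kargs', ...)
    to True once its '<stage>: <stage> passed' line has been seen."""
    stage_re = re.compile(r"ignition\[\d+\]:\s*(?:INFO : )?Stage: (\w+)")
    passed_re = re.compile(r"ignition\[\d+\]:\s*(?:INFO : )?(\w+): \1 passed")
    stages = {name: False for name in stage_re.findall(journal_text)}
    for name in passed_re.findall(journal_text):
        if name in stages:
            stages[name] = True
    return stages

# For the excerpt above this would yield roughly:
# {'kargs': True, 'disks': True, 'mount': True, 'files': True, 'umount': True}
```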
Sep 4 17:29:40.830175 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:40.830954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:29:40.831012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:40.834100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:29:40.834149 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:40.834595 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:29:40.834643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:29:40.835445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:29:40.835488 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:29:40.835936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:29:40.835983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:29:40.837363 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:29:40.847543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:29:40.847595 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:40.848223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:40.848266 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:40.862440 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:29:40.862578 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:29:40.865162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:29:40.865266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:29:41.041951 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:29:41.042082 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:29:41.044160 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:29:41.044593 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:29:41.044656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:29:41.057015 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:29:41.067456 systemd[1]: Switching root. Sep 4 17:29:41.098103 systemd-journald[193]: Journal stopped Sep 4 17:29:42.822207 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 4 17:29:42.822276 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:29:42.822290 kernel: SELinux: policy capability open_perms=1 Sep 4 17:29:42.822307 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:29:42.822318 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:29:42.822330 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:29:42.822341 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:29:42.822357 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:29:42.822372 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:29:42.822383 kernel: audit: type=1403 audit(1725470981.942:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:29:42.822396 systemd[1]: Successfully loaded SELinux policy in 62.869ms. Sep 4 17:29:42.822419 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.299ms. 
Sep 4 17:29:42.822432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:29:42.822444 systemd[1]: Detected virtualization kvm. Sep 4 17:29:42.822457 systemd[1]: Detected architecture x86-64. Sep 4 17:29:42.822468 systemd[1]: Detected first boot. Sep 4 17:29:42.822480 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:29:42.822495 zram_generator::config[1049]: No configuration found. Sep 4 17:29:42.822508 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:29:42.822520 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:29:42.822532 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:29:42.822544 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:42.822556 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:29:42.822568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:29:42.822581 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:29:42.822595 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:29:42.822607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:29:42.822619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:29:42.822632 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:29:42.822643 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:29:42.822655 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:29:42.822667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:29:42.822681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:29:42.822693 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:29:42.822707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:29:42.822720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:29:42.822732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:29:42.822744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:29:42.822847 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:29:42.822862 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:29:42.822874 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:29:42.822889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:29:42.822901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:29:42.823952 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:29:42.823967 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:29:42.823979 systemd[1]: Reached target swap.target - Swaps. 
Sep 4 17:29:42.823991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:29:42.824009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:29:42.824021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:29:42.824033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:29:42.824045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:29:42.824060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:29:42.824076 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:29:42.824089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:29:42.824101 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:29:42.824113 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:42.824124 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:29:42.824136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:29:42.824148 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:29:42.824163 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:29:42.824175 systemd[1]: Reached target machines.target - Containers. Sep 4 17:29:42.824187 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:29:42.824200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:42.824212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:29:42.824224 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:29:42.824236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:42.824248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:42.824270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:42.824282 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:29:42.824294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:42.824306 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:29:42.824319 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:29:42.824331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:29:42.824343 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:29:42.824356 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:29:42.824367 kernel: fuse: init (API version 7.39) Sep 4 17:29:42.824381 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:29:42.824393 kernel: loop: module loaded Sep 4 17:29:42.824405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:29:42.824417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 4 17:29:42.824429 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:29:42.824441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:29:42.824453 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:29:42.824465 systemd[1]: Stopped verity-setup.service. Sep 4 17:29:42.824477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:42.824509 systemd-journald[1118]: Collecting audit messages is disabled. Sep 4 17:29:42.824531 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:29:42.824543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:29:42.824557 systemd-journald[1118]: Journal started Sep 4 17:29:42.824580 systemd-journald[1118]: Runtime Journal (/run/log/journal/c084f7aa73b04bf2bda56e7ded9a1a8f) is 6.0M, max 48.3M, 42.3M free. Sep 4 17:29:42.602826 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:29:42.621341 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:29:42.621940 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:29:42.825808 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:29:42.827336 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:29:42.828457 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:29:42.829674 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:29:42.831059 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:29:42.832307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:29:42.834101 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:29:42.834277 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:29:42.835786 kernel: ACPI: bus type drm_connector registered Sep 4 17:29:42.836382 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:29:42.837901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:42.838065 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:42.839624 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:42.839811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:29:42.841408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:42.841566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:42.843257 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:29:42.843431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:29:42.844825 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:42.844985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:42.846481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:29:42.847921 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:29:42.849450 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:29:42.864022 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 4 17:29:42.873864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:29:42.876168 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:29:42.877323 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:29:42.877353 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:29:42.879347 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:29:42.881653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:29:42.884903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:29:42.886015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:42.888702 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:29:42.892569 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:29:42.894215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:42.896866 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:29:42.898626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:42.902733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:42.904462 systemd-journald[1118]: Time spent on flushing to /var/log/journal/c084f7aa73b04bf2bda56e7ded9a1a8f is 17.345ms for 992 entries. Sep 4 17:29:42.904462 systemd-journald[1118]: System Journal (/var/log/journal/c084f7aa73b04bf2bda56e7ded9a1a8f) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:29:42.935611 systemd-journald[1118]: Received client request to flush runtime journal. Sep 4 17:29:42.935647 kernel: loop0: detected capacity change from 0 to 80568 Sep 4 17:29:42.935662 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:29:42.909937 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:29:42.916312 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:29:42.919713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:29:42.921194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:29:42.922468 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:29:42.924806 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:29:42.927363 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:29:42.937953 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:29:42.943429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:42.945583 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:29:42.953171 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Sep 4 17:29:42.957780 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:29:42.958194 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:29:42.971009 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:29:42.978208 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:29:42.986780 kernel: loop1: detected capacity change from 0 to 139904 Sep 4 17:29:42.987921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:29:42.990705 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:29:42.991454 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:29:43.017451 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 4 17:29:43.017472 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 4 17:29:43.025080 kernel: loop2: detected capacity change from 0 to 210664 Sep 4 17:29:43.025685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:29:43.061793 kernel: loop3: detected capacity change from 0 to 80568 Sep 4 17:29:43.071783 kernel: loop4: detected capacity change from 0 to 139904 Sep 4 17:29:43.082793 kernel: loop5: detected capacity change from 0 to 210664 Sep 4 17:29:43.090715 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:29:43.092023 (sd-merge)[1186]: Merged extensions into '/usr'. Sep 4 17:29:43.096472 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:29:43.096569 systemd[1]: Reloading... Sep 4 17:29:43.154501 zram_generator::config[1208]: No configuration found. Sep 4 17:29:43.216341 ldconfig[1157]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:29:43.270486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:43.319133 systemd[1]: Reloading finished in 222 ms. Sep 4 17:29:43.352973 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:29:43.354503 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:29:43.370952 systemd[1]: Starting ensure-sysext.service... Sep 4 17:29:43.373031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:29:43.381499 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:29:43.381509 systemd[1]: Reloading... Sep 4 17:29:43.399734 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:29:43.400551 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:29:43.401916 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:29:43.402331 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 4 17:29:43.402428 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. 
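The extension merge recorded above ('containerd-flatcar', 'docker-flatcar', 'kubernetes' merged into /usr) is systemd-sysext picking up extension images: the kubernetes.raw symlink written during the Ignition files stage is what makes the 'kubernetes' extension appear, while the Flatcar-shipped extensions come from elsewhere on the image. A minimal sketch of enumerating images in the usual configuration-side search paths (assumed standard locations, illustrative only):

```python
import os

# Common sysext search paths; the log shows /etc/extensions/kubernetes.raw,
# a symlink created by the Ignition files stage.
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    images = []
    for root in SEARCH_PATHS:
        if not os.path.isdir(root):
            continue
        for entry in sorted(os.listdir(root)):
            path = os.path.join(root, entry)
            target = os.readlink(path) if os.path.islink(path) else None
            images.append((path, target))
    return images

for path, target in list_sysext_images():
    print(path, "->", target if target else "(regular file or directory)")
```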
Sep 4 17:29:43.412165 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:43.412181 systemd-tmpfiles[1250]: Skipping /boot Sep 4 17:29:43.424789 zram_generator::config[1274]: No configuration found. Sep 4 17:29:43.429606 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:29:43.429620 systemd-tmpfiles[1250]: Skipping /boot Sep 4 17:29:43.532458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:43.581790 systemd[1]: Reloading finished in 199 ms. Sep 4 17:29:43.606203 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:29:43.619301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:29:43.628235 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:43.631083 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:29:43.633704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:29:43.638547 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:29:43.643068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:29:43.646919 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:29:43.650406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.650562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:43.652221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:43.657055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:43.661259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:43.662603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:43.664813 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:29:43.666347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.667330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:43.667529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:43.672638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:43.676962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.677168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:43.680027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:43.682463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 17:29:43.682612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.683862 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:29:43.685849 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:29:43.687862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:29:43.688035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:43.691806 augenrules[1340]: No rules Sep 4 17:29:43.695066 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:43.695266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:43.697202 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:43.699029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:43.699196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:43.701263 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Sep 4 17:29:43.707851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:43.708058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:43.717057 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:29:43.721858 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:29:43.724066 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:29:43.727962 systemd[1]: Finished ensure-sysext.service. Sep 4 17:29:43.731455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.731624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:29:43.732862 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:29:43.735205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:29:43.739902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:29:43.744049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:29:43.746014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:29:43.750251 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:29:43.751670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:29:43.751710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:29:43.752083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:29:43.755443 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:29:43.757589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 4 17:29:43.757831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:29:43.760882 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:29:43.761111 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:29:43.763489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:29:43.763718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:29:43.765731 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:29:43.765981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:29:43.788311 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:29:43.797948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:29:43.800818 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:29:43.800878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:29:43.801998 systemd-resolved[1318]: Positive Trust Anchors: Sep 4 17:29:43.802251 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:29:43.802356 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:29:43.806291 systemd-resolved[1318]: Defaulting to hostname 'linux'. Sep 4 17:29:43.809778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:29:43.811291 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Sep 4 17:29:43.811410 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:29:43.830779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1387) Sep 4 17:29:43.866134 systemd-networkd[1385]: lo: Link UP Sep 4 17:29:43.866148 systemd-networkd[1385]: lo: Gained carrier Sep 4 17:29:43.867056 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:29:43.868053 systemd-networkd[1385]: Enumeration completed Sep 4 17:29:43.868495 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:29:43.868529 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:43.868534 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:29:43.869770 systemd-networkd[1385]: eth0: Link UP Sep 4 17:29:43.869779 systemd-networkd[1385]: eth0: Gained carrier Sep 4 17:29:43.869794 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:43.869911 systemd[1]: Reached target network.target - Network. 
Sep 4 17:29:43.870948 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:29:43.878208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:29:43.878983 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:29:43.882900 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Sep 4 17:29:43.883803 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:29:45.043058 systemd-timesyncd[1370]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:29:45.043158 systemd-timesyncd[1370]: Initial clock synchronization to Wed 2024-09-04 17:29:45.042889 UTC. Sep 4 17:29:45.043349 systemd-resolved[1318]: Clock change detected. Flushing caches. Sep 4 17:29:45.045392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 17:29:45.049553 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:29:45.050061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:29:45.057584 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:29:45.060568 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Sep 4 17:29:45.065579 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 4 17:29:45.076290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:29:45.097422 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:29:45.105672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:45.115297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:29:45.115546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:45.127551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:29:45.174762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:29:45.203484 kernel: kvm_amd: TSC scaling supported Sep 4 17:29:45.203534 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:29:45.203548 kernel: kvm_amd: Nested Paging enabled Sep 4 17:29:45.204492 kernel: kvm_amd: LBR virtualization supported Sep 4 17:29:45.204511 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:29:45.205516 kernel: kvm_amd: Virtual GIF supported Sep 4 17:29:45.224021 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:29:45.253728 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:29:45.267544 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:29:45.277120 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:45.309668 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:29:45.311401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:29:45.312621 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:29:45.314104 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
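The apparent jump in timestamps above is systemd-timesyncd stepping the clock at its first successful sync with 10.0.0.1, which is also why systemd-resolved logs "Clock change detected. Flushing caches." The approximate size of the step can be read off the surrounding entries; a quick check, treating the stamps as naive UTC times (illustrative only):

```python
from datetime import datetime

fmt = "%H:%M:%S.%f"
before = datetime.strptime("17:29:43.883803", fmt)  # last entry stamped with the unsynchronized clock
after = datetime.strptime("17:29:45.042889", fmt)   # time the clock was stepped to at sync
print(after - before)  # ~0:00:01.159086 -> the clock moved forward by roughly 1.16 s
```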
Sep 4 17:29:45.315525 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:29:45.317162 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:29:45.318489 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:29:45.319908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:29:45.321349 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:29:45.321392 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:29:45.322393 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:29:45.324141 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:29:45.326993 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:29:45.339569 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:29:45.341917 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:29:45.343594 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:29:45.344852 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:29:45.345943 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:29:45.346441 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:29:45.346471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:29:45.347543 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:29:45.349870 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:29:45.351983 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:29:45.355165 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:29:45.358205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:29:45.359592 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:29:45.363543 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:29:45.365152 jq[1424]: false Sep 4 17:29:45.366146 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:29:45.369537 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 17:29:45.375720 extend-filesystems[1425]: Found loop3 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found loop4 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found loop5 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found sr0 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found vda Sep 4 17:29:45.375720 extend-filesystems[1425]: Found vda1 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found vda2 Sep 4 17:29:45.375720 extend-filesystems[1425]: Found vda3 Sep 4 17:29:45.392239 extend-filesystems[1425]: Found usr Sep 4 17:29:45.392239 extend-filesystems[1425]: Found vda4 Sep 4 17:29:45.392239 extend-filesystems[1425]: Found vda6 Sep 4 17:29:45.392239 extend-filesystems[1425]: Found vda7 Sep 4 17:29:45.392239 extend-filesystems[1425]: Found vda9 Sep 4 17:29:45.392239 extend-filesystems[1425]: Checking size of /dev/vda9 Sep 4 17:29:45.406493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1375) Sep 4 17:29:45.377012 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:29:45.390822 dbus-daemon[1423]: [system] SELinux support is enabled Sep 4 17:29:45.406852 extend-filesystems[1425]: Resized partition /dev/vda9 Sep 4 17:29:45.387542 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:29:45.389953 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:29:45.391343 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:29:45.392173 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:29:45.394654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:29:45.396338 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:29:45.399941 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:29:45.404994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:29:45.406459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:29:45.406882 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:29:45.407152 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:29:45.413530 extend-filesystems[1446]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:29:45.412139 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:29:45.414093 jq[1442]: true Sep 4 17:29:45.412433 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 17:29:45.420349 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:29:45.427428 jq[1448]: true Sep 4 17:29:45.441681 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:29:45.459404 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:29:45.482192 update_engine[1441]: I0904 17:29:45.467067 1441 main.cc:92] Flatcar Update Engine starting Sep 4 17:29:45.482192 update_engine[1441]: I0904 17:29:45.475893 1441 update_check_scheduler.cc:74] Next update check in 9m12s Sep 4 17:29:45.482497 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:29:45.482497 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:29:45.482497 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:29:45.476239 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:29:45.499117 tar[1447]: linux-amd64/helm Sep 4 17:29:45.499349 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Sep 4 17:29:45.477949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:29:45.477971 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:29:45.479452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:29:45.479466 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:29:45.483645 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:29:45.484738 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:29:45.485849 systemd-logind[1437]: New seat seat0. Sep 4 17:29:45.487555 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:29:45.488976 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:29:45.490886 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:29:45.491101 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:29:45.517220 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:29:45.518699 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:29:45.522734 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:29:45.526226 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:29:45.530283 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:29:45.561001 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:29:45.569027 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:29:45.578571 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:29:45.578856 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:29:45.588724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:29:45.602427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
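
Editor's note: the extend-filesystems entries above grow the root filesystem on /dev/vda9 online with resize2fs while it is mounted on /, and the kernel confirms the new block count. A hedged sketch of the same operation driven from Go via os/exec (illustration only, not the actual extend-filesystems implementation):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// resize2fs grows a mounted ext4 filesystem to fill its partition;
	// the kernel then logs the new size, as seen above for /dev/vda9.
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	if err != nil {
		log.Fatalf("resize2fs failed: %v\n%s", err, out)
	}
	log.Printf("resize2fs output:\n%s", out)
}
```
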
Sep 4 17:29:45.612847 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:29:45.619952 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:29:45.621413 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:29:45.671605 containerd[1454]: time="2024-09-04T17:29:45.668911234Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:29:45.694985 containerd[1454]: time="2024-09-04T17:29:45.694931789Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:29:45.694985 containerd[1454]: time="2024-09-04T17:29:45.694978207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697005 containerd[1454]: time="2024-09-04T17:29:45.696793813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697005 containerd[1454]: time="2024-09-04T17:29:45.696846733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697173 containerd[1454]: time="2024-09-04T17:29:45.697110287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697173 containerd[1454]: time="2024-09-04T17:29:45.697126247Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:29:45.697247 containerd[1454]: time="2024-09-04T17:29:45.697228098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697318 containerd[1454]: time="2024-09-04T17:29:45.697292639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697318 containerd[1454]: time="2024-09-04T17:29:45.697312497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697442 containerd[1454]: time="2024-09-04T17:29:45.697417233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697679 containerd[1454]: time="2024-09-04T17:29:45.697651292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697679 containerd[1454]: time="2024-09-04T17:29:45.697673344Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:29:45.697733 containerd[1454]: time="2024-09-04T17:29:45.697684725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697845 containerd[1454]: time="2024-09-04T17:29:45.697808567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:29:45.697845 containerd[1454]: time="2024-09-04T17:29:45.697837301Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:29:45.697916 containerd[1454]: time="2024-09-04T17:29:45.697899458Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:29:45.697938 containerd[1454]: time="2024-09-04T17:29:45.697915739Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741188286Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741244893Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741260492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741297842Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741312970Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:29:45.741323 containerd[1454]: time="2024-09-04T17:29:45.741325324Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741338849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741546879Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741564893Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741578439Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741592585Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741606641Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741623553Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741636938Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741651175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741670 containerd[1454]: time="2024-09-04T17:29:45.741667335Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 4 17:29:45.741962 containerd[1454]: time="2024-09-04T17:29:45.741681542Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741962 containerd[1454]: time="2024-09-04T17:29:45.741694556Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.741962 containerd[1454]: time="2024-09-04T17:29:45.741707160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:29:45.741962 containerd[1454]: time="2024-09-04T17:29:45.741814652Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:29:45.743608 containerd[1454]: time="2024-09-04T17:29:45.743559165Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:29:45.743668 containerd[1454]: time="2024-09-04T17:29:45.743626211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743668 containerd[1454]: time="2024-09-04T17:29:45.743644646Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:29:45.743730 containerd[1454]: time="2024-09-04T17:29:45.743676796Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:29:45.743785 containerd[1454]: time="2024-09-04T17:29:45.743771193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743845 containerd[1454]: time="2024-09-04T17:29:45.743795699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743845 containerd[1454]: time="2024-09-04T17:29:45.743813232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743845 containerd[1454]: time="2024-09-04T17:29:45.743837087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743931 containerd[1454]: time="2024-09-04T17:29:45.743853237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743931 containerd[1454]: time="2024-09-04T17:29:45.743869247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743931 containerd[1454]: time="2024-09-04T17:29:45.743883864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743931 containerd[1454]: time="2024-09-04T17:29:45.743899193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.743931 containerd[1454]: time="2024-09-04T17:29:45.743922267Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:29:45.744196 containerd[1454]: time="2024-09-04T17:29:45.744169480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744196 containerd[1454]: time="2024-09-04T17:29:45.744193866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 17:29:45.744259 containerd[1454]: time="2024-09-04T17:29:45.744208794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744259 containerd[1454]: time="2024-09-04T17:29:45.744224664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744259 containerd[1454]: time="2024-09-04T17:29:45.744241175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744329 containerd[1454]: time="2024-09-04T17:29:45.744260070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744329 containerd[1454]: time="2024-09-04T17:29:45.744276611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744329 containerd[1454]: time="2024-09-04T17:29:45.744290688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:29:45.744973 containerd[1454]: time="2024-09-04T17:29:45.744887748Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:29:45.744973 containerd[1454]: time="2024-09-04T17:29:45.744962519Z" level=info msg="Connect containerd service" Sep 4 17:29:45.745147 containerd[1454]: time="2024-09-04T17:29:45.744989179Z" level=info msg="using legacy CRI server" Sep 4 17:29:45.745147 containerd[1454]: time="2024-09-04T17:29:45.744999237Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:29:45.745147 containerd[1454]: time="2024-09-04T17:29:45.745103623Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:29:45.745871 containerd[1454]: time="2024-09-04T17:29:45.745824556Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:29:45.745903 containerd[1454]: time="2024-09-04T17:29:45.745894457Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:29:45.745977 containerd[1454]: time="2024-09-04T17:29:45.745914685Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:29:45.745977 containerd[1454]: time="2024-09-04T17:29:45.745927239Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:29:45.745977 containerd[1454]: time="2024-09-04T17:29:45.745942718Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:29:45.746103 containerd[1454]: time="2024-09-04T17:29:45.746030232Z" level=info msg="Start subscribing containerd event" Sep 4 17:29:45.746161 containerd[1454]: time="2024-09-04T17:29:45.746132624Z" level=info msg="Start recovering state" Sep 4 17:29:45.746226 containerd[1454]: time="2024-09-04T17:29:45.746211392Z" level=info msg="Start event monitor" Sep 4 17:29:45.746249 containerd[1454]: time="2024-09-04T17:29:45.746233874Z" level=info msg="Start snapshots syncer" Sep 4 17:29:45.746391 containerd[1454]: time="2024-09-04T17:29:45.746311530Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:29:45.746391 containerd[1454]: time="2024-09-04T17:29:45.746323693Z" level=info msg="Start streaming server" Sep 4 17:29:45.746436 containerd[1454]: time="2024-09-04T17:29:45.746394045Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:29:45.746485 containerd[1454]: time="2024-09-04T17:29:45.746459848Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:29:45.746589 containerd[1454]: time="2024-09-04T17:29:45.746570355Z" level=info msg="containerd successfully booted in 0.078869s" Sep 4 17:29:45.746675 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:29:45.864876 tar[1447]: linux-amd64/LICENSE Sep 4 17:29:45.865006 tar[1447]: linux-amd64/README.md Sep 4 17:29:45.880351 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:29:46.376547 systemd-networkd[1385]: eth0: Gained IPv6LL Sep 4 17:29:46.380024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
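
Editor's note: once containerd reports "containerd successfully booted" and is serving on /run/containerd/containerd.sock, that socket can be exercised with the official Go client from the same 1.7 line shown in the log. A minimal sketch (the "k8s.io" namespace is the one the CRI plugin uses; adjust as needed):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("containerd %s (revision %s)", v.Version, v.Revision)
}
```
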
Sep 4 17:29:46.382013 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:29:46.393649 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:29:46.395800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:46.397999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:29:46.417755 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:29:46.418156 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:29:46.419885 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:29:46.426123 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:29:47.087831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:47.089911 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:29:47.091466 systemd[1]: Startup finished in 801ms (kernel) + 6.243s (initrd) + 4.028s (userspace) = 11.074s. Sep 4 17:29:47.094036 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:47.634790 kubelet[1535]: E0904 17:29:47.634710 1535 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:47.639223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:47.639498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:47.639876 systemd[1]: kubelet.service: Consumed 1.072s CPU time. Sep 4 17:29:51.712740 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:29:51.714159 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:47390.service - OpenSSH per-connection server daemon (10.0.0.1:47390). Sep 4 17:29:51.754229 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 47390 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:51.756027 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:51.764655 systemd-logind[1437]: New session 1 of user core. Sep 4 17:29:51.766009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:29:51.776574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:29:51.788718 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:29:51.797872 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:29:51.800690 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:51.926115 systemd[1553]: Queued start job for default target default.target. Sep 4 17:29:51.937904 systemd[1553]: Created slice app.slice - User Application Slice. Sep 4 17:29:51.937933 systemd[1553]: Reached target paths.target - Paths. Sep 4 17:29:51.937947 systemd[1553]: Reached target timers.target - Timers. Sep 4 17:29:51.939843 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:29:51.952756 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Sep 4 17:29:51.952916 systemd[1553]: Reached target sockets.target - Sockets. Sep 4 17:29:51.952941 systemd[1553]: Reached target basic.target - Basic System. Sep 4 17:29:51.952988 systemd[1553]: Reached target default.target - Main User Target. Sep 4 17:29:51.953025 systemd[1553]: Startup finished in 145ms. Sep 4 17:29:51.953612 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:29:51.961525 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:29:52.022458 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:47406.service - OpenSSH per-connection server daemon (10.0.0.1:47406). Sep 4 17:29:52.057053 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 47406 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.058637 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.063213 systemd-logind[1437]: New session 2 of user core. Sep 4 17:29:52.072548 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:29:52.128815 sshd[1564]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:52.139239 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:47406.service: Deactivated successfully. Sep 4 17:29:52.141079 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:29:52.142544 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:29:52.143862 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:47420.service - OpenSSH per-connection server daemon (10.0.0.1:47420). Sep 4 17:29:52.144775 systemd-logind[1437]: Removed session 2. Sep 4 17:29:52.175522 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 47420 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.176931 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.180677 systemd-logind[1437]: New session 3 of user core. Sep 4 17:29:52.190483 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:29:52.239274 sshd[1571]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:52.255774 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:47420.service: Deactivated successfully. Sep 4 17:29:52.257195 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:29:52.258519 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:29:52.259620 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:47426.service - OpenSSH per-connection server daemon (10.0.0.1:47426). Sep 4 17:29:52.260279 systemd-logind[1437]: Removed session 3. Sep 4 17:29:52.289741 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 47426 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.291014 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.294482 systemd-logind[1437]: New session 4 of user core. Sep 4 17:29:52.304491 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:29:52.359062 sshd[1578]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:52.371079 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:47426.service: Deactivated successfully. Sep 4 17:29:52.372666 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:29:52.374040 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:29:52.381736 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:47438.service - OpenSSH per-connection server daemon (10.0.0.1:47438). 
Sep 4 17:29:52.382592 systemd-logind[1437]: Removed session 4. Sep 4 17:29:52.408965 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 47438 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.410550 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.414804 systemd-logind[1437]: New session 5 of user core. Sep 4 17:29:52.424545 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:29:52.482126 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:29:52.482433 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:52.504047 sudo[1589]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:52.506144 sshd[1585]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:52.516432 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:47438.service: Deactivated successfully. Sep 4 17:29:52.518419 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:29:52.520200 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:29:52.521635 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:47448.service - OpenSSH per-connection server daemon (10.0.0.1:47448). Sep 4 17:29:52.522434 systemd-logind[1437]: Removed session 5. Sep 4 17:29:52.554139 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 47448 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.555709 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.559803 systemd-logind[1437]: New session 6 of user core. Sep 4 17:29:52.569538 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:29:52.623925 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:29:52.624216 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:52.627902 sudo[1598]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:52.634108 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:29:52.634443 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:52.654600 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:52.656315 auditctl[1601]: No rules Sep 4 17:29:52.656802 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:29:52.657073 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:52.659696 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:29:52.688667 augenrules[1619]: No rules Sep 4 17:29:52.690615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:29:52.691959 sudo[1597]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:52.693806 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:52.709230 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:47448.service: Deactivated successfully. Sep 4 17:29:52.711116 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:29:52.712585 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:29:52.727752 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:47462.service - OpenSSH per-connection server daemon (10.0.0.1:47462). Sep 4 17:29:52.728735 systemd-logind[1437]: Removed session 6. 
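
Editor's note: the sshd entries above record repeated public-key logins for the core user. For illustration only, a minimal Go sketch of the client side of such a session using golang.org/x/crypto/ssh; the host and user come from the log, while the key path is a placeholder:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch, not for production
	}
	client, err := ssh.Dial("tcp", "10.0.0.130:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("systemctl is-system-running")
	if err != nil {
		log.Printf("remote command returned: %v", err)
	}
	log.Printf("%s", out)
}
```
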
Sep 4 17:29:52.757466 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 47462 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:29:52.759229 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:52.763269 systemd-logind[1437]: New session 7 of user core. Sep 4 17:29:52.778605 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:29:52.831696 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:29:52.832037 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:29:52.934588 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:29:52.934801 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:29:53.171549 dockerd[1641]: time="2024-09-04T17:29:53.171406999Z" level=info msg="Starting up" Sep 4 17:29:53.511266 dockerd[1641]: time="2024-09-04T17:29:53.511202461Z" level=info msg="Loading containers: start." Sep 4 17:29:53.640407 kernel: Initializing XFRM netlink socket Sep 4 17:29:53.726823 systemd-networkd[1385]: docker0: Link UP Sep 4 17:29:53.752413 dockerd[1641]: time="2024-09-04T17:29:53.752355307Z" level=info msg="Loading containers: done." Sep 4 17:29:53.809872 dockerd[1641]: time="2024-09-04T17:29:53.809744944Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:29:53.810035 dockerd[1641]: time="2024-09-04T17:29:53.809957022Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:29:53.810103 dockerd[1641]: time="2024-09-04T17:29:53.810079282Z" level=info msg="Daemon has completed initialization" Sep 4 17:29:53.847607 dockerd[1641]: time="2024-09-04T17:29:53.847516921Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:29:53.847876 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:29:54.482771 containerd[1454]: time="2024-09-04T17:29:54.482734661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:29:55.180151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556701262.mount: Deactivated successfully. 
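
Editor's note: with dockerd reporting "API listen on /run/docker.sock", the daemon can be reached through the official Go SDK. A small sketch that pings the API and prints the negotiated version (illustration only):

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket /var/run/docker.sock
	// when DOCKER_HOST is not set; version negotiation avoids API mismatches.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("docker API version: %s, OS type: %s", ping.APIVersion, ping.OSType)
}
```
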
Sep 4 17:29:56.395224 containerd[1454]: time="2024-09-04T17:29:56.395156363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:56.395954 containerd[1454]: time="2024-09-04T17:29:56.395861577Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32772416" Sep 4 17:29:56.397089 containerd[1454]: time="2024-09-04T17:29:56.397054605Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:56.400118 containerd[1454]: time="2024-09-04T17:29:56.400064984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:56.401185 containerd[1454]: time="2024-09-04T17:29:56.401151002Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 1.918377488s" Sep 4 17:29:56.401244 containerd[1454]: time="2024-09-04T17:29:56.401190055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\"" Sep 4 17:29:56.422346 containerd[1454]: time="2024-09-04T17:29:56.422302501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:29:57.710440 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:29:57.717816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:57.880303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:57.884816 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:57.932037 kubelet[1854]: E0904 17:29:57.931984 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:57.939080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:57.939296 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
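
Editor's note: the kubelet keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so these failures are expected until the node is bootstrapped. A trivial Go sketch of the same pre-flight check (illustrative only):

```go
package main

import (
	"errors"
	"log"
	"os"
)

func main() {
	const cfg = "/var/lib/kubelet/config.yaml" // path reported in the kubelet error above
	if _, err := os.Stat(cfg); errors.Is(err, os.ErrNotExist) {
		log.Fatalf("%s not found; kubeadm init/join has not written the kubelet config yet", cfg)
	} else if err != nil {
		log.Fatalf("cannot stat %s: %v", cfg, err)
	}
	log.Printf("%s present; kubelet should be able to start", cfg)
}
```
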
Sep 4 17:29:59.383287 containerd[1454]: time="2024-09-04T17:29:59.383212977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:59.384905 containerd[1454]: time="2024-09-04T17:29:59.384870518Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=29594065" Sep 4 17:29:59.386043 containerd[1454]: time="2024-09-04T17:29:59.386006459Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:59.392315 containerd[1454]: time="2024-09-04T17:29:59.392280131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:59.393291 containerd[1454]: time="2024-09-04T17:29:59.393258898Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 2.970917714s" Sep 4 17:29:59.393354 containerd[1454]: time="2024-09-04T17:29:59.393294715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\"" Sep 4 17:29:59.420204 containerd[1454]: time="2024-09-04T17:29:59.419834956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 17:30:00.466103 containerd[1454]: time="2024-09-04T17:30:00.466054021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:00.466900 containerd[1454]: time="2024-09-04T17:30:00.466859923Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17780233" Sep 4 17:30:00.468026 containerd[1454]: time="2024-09-04T17:30:00.468001946Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:00.470697 containerd[1454]: time="2024-09-04T17:30:00.470668810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:00.471736 containerd[1454]: time="2024-09-04T17:30:00.471707209Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 1.051840363s" Sep 4 17:30:00.471794 containerd[1454]: time="2024-09-04T17:30:00.471735983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\"" Sep 4 17:30:00.493081 containerd[1454]: 
time="2024-09-04T17:30:00.493010833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:30:01.573265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239055097.mount: Deactivated successfully. Sep 4 17:30:02.714829 containerd[1454]: time="2024-09-04T17:30:02.714747032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:02.754283 containerd[1454]: time="2024-09-04T17:30:02.754207427Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29037161" Sep 4 17:30:02.780117 containerd[1454]: time="2024-09-04T17:30:02.780082309Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:02.797979 containerd[1454]: time="2024-09-04T17:30:02.797947001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:02.798555 containerd[1454]: time="2024-09-04T17:30:02.798507272Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"29036180\" in 2.30545968s" Sep 4 17:30:02.798555 containerd[1454]: time="2024-09-04T17:30:02.798536497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\"" Sep 4 17:30:02.819210 containerd[1454]: time="2024-09-04T17:30:02.819174302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:30:03.420143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410397998.mount: Deactivated successfully. 
Sep 4 17:30:04.063559 containerd[1454]: time="2024-09-04T17:30:04.063497345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.064231 containerd[1454]: time="2024-09-04T17:30:04.064198661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:30:04.065294 containerd[1454]: time="2024-09-04T17:30:04.065265072Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.067852 containerd[1454]: time="2024-09-04T17:30:04.067821659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.069148 containerd[1454]: time="2024-09-04T17:30:04.069103936Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.249773721s" Sep 4 17:30:04.069199 containerd[1454]: time="2024-09-04T17:30:04.069143720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:30:04.090630 containerd[1454]: time="2024-09-04T17:30:04.090595703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:30:04.615021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189903139.mount: Deactivated successfully. 
Sep 4 17:30:04.620327 containerd[1454]: time="2024-09-04T17:30:04.620274087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.621061 containerd[1454]: time="2024-09-04T17:30:04.621015047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:30:04.622085 containerd[1454]: time="2024-09-04T17:30:04.622047845Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.624335 containerd[1454]: time="2024-09-04T17:30:04.624283851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:04.624994 containerd[1454]: time="2024-09-04T17:30:04.624954760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 534.324712ms" Sep 4 17:30:04.624994 containerd[1454]: time="2024-09-04T17:30:04.624986309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:30:04.647545 containerd[1454]: time="2024-09-04T17:30:04.647495165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:30:05.198303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003875323.mount: Deactivated successfully. Sep 4 17:30:07.340993 containerd[1454]: time="2024-09-04T17:30:07.340906619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:07.341728 containerd[1454]: time="2024-09-04T17:30:07.341613615Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Sep 4 17:30:07.343034 containerd[1454]: time="2024-09-04T17:30:07.342983115Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:07.346581 containerd[1454]: time="2024-09-04T17:30:07.346531473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:07.347946 containerd[1454]: time="2024-09-04T17:30:07.347915530Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.700377475s" Sep 4 17:30:07.347994 containerd[1454]: time="2024-09-04T17:30:07.347945116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Sep 4 17:30:07.960543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
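
Editor's note: the PullImage entries above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) are CRI pulls carried out by containerd. For illustration, the same kind of pull can be issued directly with the containerd Go client; a minimal sketch using the pause image named in the log:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// WithPullUnpack unpacks the layers into the overlayfs snapshotter,
	// mirroring what the CRI plugin does for sandbox and application images.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	log.Printf("pulled %s (%d bytes)", img.Name(), size)
}
```
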
Sep 4 17:30:07.973569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:08.125075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:08.129647 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:08.172606 kubelet[2081]: E0904 17:30:08.172560 2081 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:08.177256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:08.177484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:09.644827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:09.658630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:09.676946 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)... Sep 4 17:30:09.676960 systemd[1]: Reloading... Sep 4 17:30:09.759407 zram_generator::config[2136]: No configuration found. Sep 4 17:30:10.288704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:30:10.363921 systemd[1]: Reloading finished in 686 ms. Sep 4 17:30:10.417216 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:30:10.417312 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:30:10.417594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:10.419246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:10.600491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:10.607532 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:30:10.653843 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:30:10.653843 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:30:10.653843 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
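
Editor's note: the climbing restart counter and the daemon reload above are ordinary systemd unit lifecycle events. For completeness, a hedged Go sketch that inspects the kubelet unit's state over the systemd D-Bus API with the go-systemd library (illustration only; `systemctl status kubelet.service` shows the same from a shell):

```go
package main

import (
	"context"
	"log"
	"strings"

	sd "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := sd.NewWithContext(ctx) // talks to systemd over D-Bus
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range units {
		if strings.HasPrefix(u.Name, "kubelet") {
			// e.g. kubelet.service cycling through auto-restart while the counter climbs
			log.Printf("%s: load=%s active=%s sub=%s", u.Name, u.LoadState, u.ActiveState, u.SubState)
		}
	}
}
```
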
Sep 4 17:30:10.655054 kubelet[2181]: I0904 17:30:10.655001 2181 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:30:10.933796 kubelet[2181]: I0904 17:30:10.933684 2181 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:30:10.933796 kubelet[2181]: I0904 17:30:10.933723 2181 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:30:10.934020 kubelet[2181]: I0904 17:30:10.933978 2181 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:30:10.952557 kubelet[2181]: I0904 17:30:10.952340 2181 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:30:10.952846 kubelet[2181]: E0904 17:30:10.952828 2181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.966594 kubelet[2181]: I0904 17:30:10.966557 2181 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:30:10.968501 kubelet[2181]: I0904 17:30:10.968450 2181 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:30:10.968662 kubelet[2181]: I0904 17:30:10.968487 2181 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:30:10.968754 kubelet[2181]: I0904 17:30:10.968670 2181 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:30:10.968754 kubelet[2181]: I0904 17:30:10.968680 2181 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:30:10.968822 kubelet[2181]: I0904 17:30:10.968808 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 4 
17:30:10.969488 kubelet[2181]: I0904 17:30:10.969458 2181 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:30:10.969488 kubelet[2181]: I0904 17:30:10.969474 2181 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:30:10.969635 kubelet[2181]: I0904 17:30:10.969495 2181 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:30:10.969635 kubelet[2181]: I0904 17:30:10.969515 2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:30:10.970399 kubelet[2181]: W0904 17:30:10.969953 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.970399 kubelet[2181]: E0904 17:30:10.970018 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.970399 kubelet[2181]: W0904 17:30:10.970064 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.970399 kubelet[2181]: E0904 17:30:10.970102 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.972970 kubelet[2181]: I0904 17:30:10.972939 2181 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:30:10.974232 kubelet[2181]: I0904 17:30:10.974207 2181 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:30:10.974284 kubelet[2181]: W0904 17:30:10.974274 2181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
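
Editor's note: the reflector list/watch calls and the certificate signing request above all fail with "dial tcp 10.0.0.130:6443: connect: connection refused" because no kube-apiserver is listening yet; the kubelet keeps retrying while the control-plane static pods from /etc/kubernetes/manifests come up. A small Go sketch of the same reachability check against the apiserver's health endpoint (TLS verification is skipped purely for this probe):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The endpoint comes from the log; skipping certificate verification is
	// acceptable only for a connectivity probe like this one.
	transport := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	client := &http.Client{Transport: transport, Timeout: 5 * time.Second}

	resp, err := client.Get("https://10.0.0.130:6443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // e.g. connection refused, as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz:", resp.Status)
}
```
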
Sep 4 17:30:10.975004 kubelet[2181]: I0904 17:30:10.974979 2181 server.go:1264] "Started kubelet" Sep 4 17:30:10.975081 kubelet[2181]: I0904 17:30:10.975043 2181 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:30:10.976164 kubelet[2181]: I0904 17:30:10.975437 2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:30:10.976164 kubelet[2181]: I0904 17:30:10.975804 2181 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:30:10.976164 kubelet[2181]: I0904 17:30:10.976079 2181 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:30:10.976293 kubelet[2181]: I0904 17:30:10.976233 2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:30:10.978975 kubelet[2181]: E0904 17:30:10.978956 2181 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:30:10.979083 kubelet[2181]: I0904 17:30:10.979072 2181 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:30:10.979293 kubelet[2181]: I0904 17:30:10.979275 2181 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:30:10.979453 kubelet[2181]: I0904 17:30:10.979437 2181 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:30:10.981106 kubelet[2181]: W0904 17:30:10.981065 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.981106 kubelet[2181]: E0904 17:30:10.981109 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:10.982027 kubelet[2181]: E0904 17:30:10.981307 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Sep 4 17:30:10.982670 kubelet[2181]: I0904 17:30:10.982341 2181 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:30:10.982756 kubelet[2181]: I0904 17:30:10.982732 2181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:30:10.983754 kubelet[2181]: E0904 17:30:10.983553 2181 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:30:10.984577 kubelet[2181]: E0904 17:30:10.984099 2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21abe1765844b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:30:10.974958667 +0000 UTC m=+0.362930344,LastTimestamp:2024-09-04 17:30:10.974958667 +0000 UTC m=+0.362930344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:30:10.985708 kubelet[2181]: I0904 17:30:10.985683 2181 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:30:11.000570 kubelet[2181]: I0904 17:30:10.999538 2181 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:30:11.000570 kubelet[2181]: I0904 17:30:10.999611 2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:30:11.000570 kubelet[2181]: I0904 17:30:10.999630 2181 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:30:11.001387 kubelet[2181]: I0904 17:30:11.000713 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:30:11.002559 kubelet[2181]: I0904 17:30:11.002523 2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:30:11.002623 kubelet[2181]: I0904 17:30:11.002572 2181 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:30:11.002623 kubelet[2181]: I0904 17:30:11.002602 2181 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:30:11.002687 kubelet[2181]: E0904 17:30:11.002661 2181 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:30:11.003407 kubelet[2181]: W0904 17:30:11.003282 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:11.003468 kubelet[2181]: E0904 17:30:11.003417 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:11.081472 kubelet[2181]: I0904 17:30:11.081427 2181 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:11.081999 kubelet[2181]: E0904 17:30:11.081953 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 4 17:30:11.103319 kubelet[2181]: E0904 17:30:11.103254 2181 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:30:11.182342 kubelet[2181]: E0904 17:30:11.182274 2181 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Sep 4 17:30:11.283679 kubelet[2181]: I0904 17:30:11.283640 2181 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:11.283981 kubelet[2181]: E0904 17:30:11.283945 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 4 17:30:11.304089 kubelet[2181]: E0904 17:30:11.304050 2181 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:30:11.490324 kubelet[2181]: I0904 17:30:11.490268 2181 policy_none.go:49] "None policy: Start" Sep 4 17:30:11.490942 kubelet[2181]: I0904 17:30:11.490928 2181 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:30:11.490942 kubelet[2181]: I0904 17:30:11.490950 2181 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:30:11.520846 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:30:11.541448 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:30:11.556690 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:30:11.558695 kubelet[2181]: I0904 17:30:11.558655 2181 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:30:11.559633 kubelet[2181]: I0904 17:30:11.558870 2181 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:30:11.559633 kubelet[2181]: I0904 17:30:11.559186 2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:30:11.560954 kubelet[2181]: E0904 17:30:11.560924 2181 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:30:11.583322 kubelet[2181]: E0904 17:30:11.583247 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Sep 4 17:30:11.685795 kubelet[2181]: I0904 17:30:11.685751 2181 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:11.686177 kubelet[2181]: E0904 17:30:11.686061 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 4 17:30:11.704242 kubelet[2181]: I0904 17:30:11.704154 2181 topology_manager.go:215] "Topology Admit Handler" podUID="1c585577efa1e7605e60a8330309fa2b" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:30:11.705169 kubelet[2181]: I0904 17:30:11.705144 2181 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:30:11.705778 kubelet[2181]: I0904 17:30:11.705763 2181 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 
4 17:30:11.710945 systemd[1]: Created slice kubepods-burstable-pod1c585577efa1e7605e60a8330309fa2b.slice - libcontainer container kubepods-burstable-pod1c585577efa1e7605e60a8330309fa2b.slice. Sep 4 17:30:11.740045 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice. Sep 4 17:30:11.753975 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice. Sep 4 17:30:11.785811 kubelet[2181]: I0904 17:30:11.785770 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:11.785811 kubelet[2181]: I0904 17:30:11.785811 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:11.785977 kubelet[2181]: I0904 17:30:11.785841 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:30:11.785977 kubelet[2181]: I0904 17:30:11.785861 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:11.785977 kubelet[2181]: I0904 17:30:11.785883 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:11.785977 kubelet[2181]: I0904 17:30:11.785905 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:11.785977 kubelet[2181]: I0904 17:30:11.785926 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:11.786108 kubelet[2181]: I0904 17:30:11.785949 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:11.786108 kubelet[2181]: I0904 17:30:11.785975 2181 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:12.037942 kubelet[2181]: E0904 17:30:12.037889 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:12.038576 containerd[1454]: time="2024-09-04T17:30:12.038528940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c585577efa1e7605e60a8330309fa2b,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:12.052762 kubelet[2181]: E0904 17:30:12.052732 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:12.053093 containerd[1454]: time="2024-09-04T17:30:12.053061119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:12.056308 kubelet[2181]: E0904 17:30:12.056275 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:12.056599 containerd[1454]: time="2024-09-04T17:30:12.056572517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:12.147786 kubelet[2181]: W0904 17:30:12.147694 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.147786 kubelet[2181]: E0904 17:30:12.147765 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.204503 kubelet[2181]: W0904 17:30:12.204455 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.204503 kubelet[2181]: E0904 17:30:12.204491 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.384729 kubelet[2181]: W0904 17:30:12.384615 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.384729 kubelet[2181]: E0904 17:30:12.384653 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.384729 kubelet[2181]: E0904 17:30:12.384614 2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Sep 4 17:30:12.418219 kubelet[2181]: W0904 17:30:12.418173 2181 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.418219 kubelet[2181]: E0904 17:30:12.418221 2181 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:12.488351 kubelet[2181]: I0904 17:30:12.488313 2181 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:12.488749 kubelet[2181]: E0904 17:30:12.488717 2181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 4 17:30:13.079245 kubelet[2181]: E0904 17:30:13.079211 2181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.130:6443: connect: connection refused Sep 4 17:30:13.271356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312686959.mount: Deactivated successfully. 
Sep 4 17:30:13.278131 containerd[1454]: time="2024-09-04T17:30:13.278068397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:30:13.279966 containerd[1454]: time="2024-09-04T17:30:13.279925231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:30:13.280906 containerd[1454]: time="2024-09-04T17:30:13.280873981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:30:13.281963 containerd[1454]: time="2024-09-04T17:30:13.281934301Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:30:13.282790 containerd[1454]: time="2024-09-04T17:30:13.282756704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:30:13.283766 containerd[1454]: time="2024-09-04T17:30:13.283727776Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:30:13.284521 containerd[1454]: time="2024-09-04T17:30:13.284487612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:30:13.287601 containerd[1454]: time="2024-09-04T17:30:13.287566770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:30:13.290067 containerd[1454]: time="2024-09-04T17:30:13.290033408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.236884556s" Sep 4 17:30:13.291262 containerd[1454]: time="2024-09-04T17:30:13.291237017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.252606626s" Sep 4 17:30:13.294867 containerd[1454]: time="2024-09-04T17:30:13.294830399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.238197098s" Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410192588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410239406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410353480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410121615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410196135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410223346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410243163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410495968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410769491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410802994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410815627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:13.411483 containerd[1454]: time="2024-09-04T17:30:13.410824504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:13.434513 systemd[1]: Started cri-containerd-1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8.scope - libcontainer container 1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8. Sep 4 17:30:13.438927 systemd[1]: Started cri-containerd-de628eb1876e6bc0bb2f056ab3958eb04ea37a93997402417ca985a56bdd740f.scope - libcontainer container de628eb1876e6bc0bb2f056ab3958eb04ea37a93997402417ca985a56bdd740f. Sep 4 17:30:13.440617 systemd[1]: Started cri-containerd-ea9dfbd12d906dce51d2254ad41ffb7009bc88835afaea9c5f0c1926371e4726.scope - libcontainer container ea9dfbd12d906dce51d2254ad41ffb7009bc88835afaea9c5f0c1926371e4726. 
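The three transient units above follow containerd's cri-containerd-&lt;container-id&gt;.scope naming, so the sandbox container ID can be read straight out of the systemd unit name. A throwaway Go sketch of that extraction (the unit name is copied from the log):

// scopename.go -- pulls the container ID out of a cri-containerd systemd scope name.
package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "cri-containerd-1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8.scope"
	id := strings.TrimSuffix(strings.TrimPrefix(unit, "cri-containerd-"), ".scope")
	fmt.Println(id)
}

The same IDs reappear below when RunPodSandbox returns and when the kube-apiserver, kube-controller-manager and kube-scheduler containers are created inside those sandboxes.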
Sep 4 17:30:13.479034 containerd[1454]: time="2024-09-04T17:30:13.478761102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8\"" Sep 4 17:30:13.480566 kubelet[2181]: E0904 17:30:13.480446 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:13.481811 containerd[1454]: time="2024-09-04T17:30:13.481775629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c585577efa1e7605e60a8330309fa2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea9dfbd12d906dce51d2254ad41ffb7009bc88835afaea9c5f0c1926371e4726\"" Sep 4 17:30:13.483296 kubelet[2181]: E0904 17:30:13.483264 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:13.484129 containerd[1454]: time="2024-09-04T17:30:13.484070765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"de628eb1876e6bc0bb2f056ab3958eb04ea37a93997402417ca985a56bdd740f\"" Sep 4 17:30:13.484678 kubelet[2181]: E0904 17:30:13.484653 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:13.486779 containerd[1454]: time="2024-09-04T17:30:13.486750895Z" level=info msg="CreateContainer within sandbox \"1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:30:13.487047 containerd[1454]: time="2024-09-04T17:30:13.487007236Z" level=info msg="CreateContainer within sandbox \"de628eb1876e6bc0bb2f056ab3958eb04ea37a93997402417ca985a56bdd740f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:30:13.488444 containerd[1454]: time="2024-09-04T17:30:13.488144389Z" level=info msg="CreateContainer within sandbox \"ea9dfbd12d906dce51d2254ad41ffb7009bc88835afaea9c5f0c1926371e4726\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:30:13.511635 containerd[1454]: time="2024-09-04T17:30:13.511593059Z" level=info msg="CreateContainer within sandbox \"1c18859729e0f24fd050a78a920e2c28635e1b45fd010128f8386cedd6c6e5b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fb49f6e3d1b7b1f66718eb988472d455ff82f3e1279d193cdf3dabb216d44f31\"" Sep 4 17:30:13.512191 containerd[1454]: time="2024-09-04T17:30:13.512167407Z" level=info msg="StartContainer for \"fb49f6e3d1b7b1f66718eb988472d455ff82f3e1279d193cdf3dabb216d44f31\"" Sep 4 17:30:13.523116 containerd[1454]: time="2024-09-04T17:30:13.523070676Z" level=info msg="CreateContainer within sandbox \"ea9dfbd12d906dce51d2254ad41ffb7009bc88835afaea9c5f0c1926371e4726\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e39cb8fa62f05b3a210b61a448fbd97adc4671a46abac25e20976632df15b87e\"" Sep 4 17:30:13.523553 containerd[1454]: time="2024-09-04T17:30:13.523532192Z" level=info msg="StartContainer for \"e39cb8fa62f05b3a210b61a448fbd97adc4671a46abac25e20976632df15b87e\"" Sep 4 17:30:13.524125 
containerd[1454]: time="2024-09-04T17:30:13.524067336Z" level=info msg="CreateContainer within sandbox \"de628eb1876e6bc0bb2f056ab3958eb04ea37a93997402417ca985a56bdd740f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cbec54d34d8b3e771be7d4ecd10dd68fdd4936065e2af5430e32a003f6b64f49\"" Sep 4 17:30:13.525697 containerd[1454]: time="2024-09-04T17:30:13.524424186Z" level=info msg="StartContainer for \"cbec54d34d8b3e771be7d4ecd10dd68fdd4936065e2af5430e32a003f6b64f49\"" Sep 4 17:30:13.541892 systemd[1]: Started cri-containerd-fb49f6e3d1b7b1f66718eb988472d455ff82f3e1279d193cdf3dabb216d44f31.scope - libcontainer container fb49f6e3d1b7b1f66718eb988472d455ff82f3e1279d193cdf3dabb216d44f31. Sep 4 17:30:13.555535 systemd[1]: Started cri-containerd-cbec54d34d8b3e771be7d4ecd10dd68fdd4936065e2af5430e32a003f6b64f49.scope - libcontainer container cbec54d34d8b3e771be7d4ecd10dd68fdd4936065e2af5430e32a003f6b64f49. Sep 4 17:30:13.559461 systemd[1]: Started cri-containerd-e39cb8fa62f05b3a210b61a448fbd97adc4671a46abac25e20976632df15b87e.scope - libcontainer container e39cb8fa62f05b3a210b61a448fbd97adc4671a46abac25e20976632df15b87e. Sep 4 17:30:13.595439 containerd[1454]: time="2024-09-04T17:30:13.595331979Z" level=info msg="StartContainer for \"fb49f6e3d1b7b1f66718eb988472d455ff82f3e1279d193cdf3dabb216d44f31\" returns successfully" Sep 4 17:30:13.602626 containerd[1454]: time="2024-09-04T17:30:13.602566324Z" level=info msg="StartContainer for \"cbec54d34d8b3e771be7d4ecd10dd68fdd4936065e2af5430e32a003f6b64f49\" returns successfully" Sep 4 17:30:13.608734 containerd[1454]: time="2024-09-04T17:30:13.608695645Z" level=info msg="StartContainer for \"e39cb8fa62f05b3a210b61a448fbd97adc4671a46abac25e20976632df15b87e\" returns successfully" Sep 4 17:30:14.012049 kubelet[2181]: E0904 17:30:14.012011 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:14.016826 kubelet[2181]: E0904 17:30:14.016802 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:14.016952 kubelet[2181]: E0904 17:30:14.016932 2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:14.090173 kubelet[2181]: I0904 17:30:14.090114 2181 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:14.382077 kubelet[2181]: E0904 17:30:14.381926 2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:30:14.489395 kubelet[2181]: I0904 17:30:14.488931 2181 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:30:14.972569 kubelet[2181]: I0904 17:30:14.972537 2181 apiserver.go:52] "Watching apiserver" Sep 4 17:30:14.979584 kubelet[2181]: I0904 17:30:14.979543 2181 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:30:15.023726 kubelet[2181]: E0904 17:30:15.023687 2181 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:15.024205 kubelet[2181]: E0904 17:30:15.024178 2181 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:16.538938 systemd[1]: Reloading requested from client PID 2462 ('systemctl') (unit session-7.scope)... Sep 4 17:30:16.538956 systemd[1]: Reloading... Sep 4 17:30:16.614419 zram_generator::config[2500]: No configuration found. Sep 4 17:30:16.732244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:30:16.822508 systemd[1]: Reloading finished in 283 ms. Sep 4 17:30:16.869512 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:16.888015 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:30:16.888303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:16.901831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:17.056629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:17.061438 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:30:17.105202 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:30:17.105202 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:30:17.105202 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:30:17.105202 kubelet[2544]: I0904 17:30:17.105157 2544 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:30:17.110916 kubelet[2544]: I0904 17:30:17.110865 2544 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:30:17.110916 kubelet[2544]: I0904 17:30:17.110900 2544 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:30:17.111219 kubelet[2544]: I0904 17:30:17.111197 2544 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:30:17.113160 kubelet[2544]: I0904 17:30:17.113066 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:30:17.114576 kubelet[2544]: I0904 17:30:17.114474 2544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:30:17.124485 kubelet[2544]: I0904 17:30:17.124448 2544 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:30:17.124764 kubelet[2544]: I0904 17:30:17.124718 2544 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:30:17.124978 kubelet[2544]: I0904 17:30:17.124750 2544 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:30:17.125105 kubelet[2544]: I0904 17:30:17.124982 2544 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:30:17.125105 kubelet[2544]: I0904 17:30:17.125007 2544 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:30:17.125105 kubelet[2544]: I0904 17:30:17.125054 2544 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:30:17.125671 kubelet[2544]: I0904 17:30:17.125202 2544 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:30:17.125671 kubelet[2544]: I0904 17:30:17.125214 2544 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:30:17.125671 kubelet[2544]: I0904 17:30:17.125240 2544 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:30:17.125671 kubelet[2544]: I0904 17:30:17.125266 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:30:17.126382 kubelet[2544]: I0904 17:30:17.125914 2544 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:30:17.126382 kubelet[2544]: I0904 17:30:17.126152 2544 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:30:17.126713 kubelet[2544]: I0904 17:30:17.126673 2544 server.go:1264] "Started kubelet" Sep 4 17:30:17.127530 kubelet[2544]: I0904 17:30:17.127096 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:30:17.127854 kubelet[2544]: I0904 17:30:17.127825 2544 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:30:17.127983 
kubelet[2544]: I0904 17:30:17.127965 2544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:30:17.129733 kubelet[2544]: I0904 17:30:17.129719 2544 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:30:17.130280 kubelet[2544]: I0904 17:30:17.128055 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:30:17.135709 kubelet[2544]: I0904 17:30:17.135254 2544 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:30:17.135709 kubelet[2544]: I0904 17:30:17.135361 2544 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:30:17.135982 kubelet[2544]: I0904 17:30:17.135970 2544 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:30:17.137166 kubelet[2544]: E0904 17:30:17.135892 2544 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:30:17.138068 kubelet[2544]: I0904 17:30:17.138053 2544 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:30:17.139095 kubelet[2544]: I0904 17:30:17.139070 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:30:17.141110 kubelet[2544]: I0904 17:30:17.141083 2544 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:30:17.146462 kubelet[2544]: I0904 17:30:17.146180 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:30:17.149737 kubelet[2544]: I0904 17:30:17.149715 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:30:17.149808 kubelet[2544]: I0904 17:30:17.149747 2544 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:30:17.149808 kubelet[2544]: I0904 17:30:17.149766 2544 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:30:17.149864 kubelet[2544]: E0904 17:30:17.149808 2544 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:30:17.172749 kubelet[2544]: I0904 17:30:17.172691 2544 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:30:17.172749 kubelet[2544]: I0904 17:30:17.172712 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:30:17.172749 kubelet[2544]: I0904 17:30:17.172732 2544 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:30:17.172957 kubelet[2544]: I0904 17:30:17.172906 2544 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:30:17.172957 kubelet[2544]: I0904 17:30:17.172918 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:30:17.172957 kubelet[2544]: I0904 17:30:17.172940 2544 policy_none.go:49] "None policy: Start" Sep 4 17:30:17.173894 kubelet[2544]: I0904 17:30:17.173863 2544 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:30:17.173894 kubelet[2544]: I0904 17:30:17.173904 2544 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:30:17.174140 kubelet[2544]: I0904 17:30:17.174122 2544 state_mem.go:75] "Updated machine memory state" Sep 4 17:30:17.178310 kubelet[2544]: I0904 17:30:17.178289 2544 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:30:17.178553 kubelet[2544]: I0904 17:30:17.178523 2544 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:30:17.178624 kubelet[2544]: I0904 17:30:17.178611 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:30:17.240683 kubelet[2544]: I0904 17:30:17.240648 2544 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:30:17.250083 kubelet[2544]: I0904 17:30:17.250036 2544 topology_manager.go:215] "Topology Admit Handler" podUID="1c585577efa1e7605e60a8330309fa2b" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:30:17.250152 kubelet[2544]: I0904 17:30:17.250130 2544 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:30:17.250195 kubelet[2544]: I0904 17:30:17.250180 2544 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:30:17.289493 kubelet[2544]: I0904 17:30:17.289446 2544 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:30:17.289674 kubelet[2544]: I0904 17:30:17.289539 2544 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:30:17.337393 kubelet[2544]: I0904 17:30:17.337325 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:17.337393 kubelet[2544]: I0904 17:30:17.337367 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:17.337393 kubelet[2544]: I0904 17:30:17.337402 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:17.337587 kubelet[2544]: I0904 17:30:17.337421 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:17.337587 kubelet[2544]: I0904 17:30:17.337444 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:30:17.337587 kubelet[2544]: I0904 17:30:17.337458 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:17.337587 kubelet[2544]: I0904 17:30:17.337470 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:17.337587 kubelet[2544]: I0904 17:30:17.337484 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c585577efa1e7605e60a8330309fa2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c585577efa1e7605e60a8330309fa2b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:17.337706 kubelet[2544]: I0904 17:30:17.337500 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:30:17.537033 sudo[2579]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:30:17.537311 sudo[2579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 4 17:30:17.559450 kubelet[2544]: E0904 17:30:17.559231 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:17.559450 kubelet[2544]: E0904 17:30:17.559346 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:17.559450 kubelet[2544]: E0904 17:30:17.559360 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.066122 sudo[2579]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:18.125499 kubelet[2544]: I0904 17:30:18.125449 2544 apiserver.go:52] "Watching apiserver" Sep 4 17:30:18.135573 kubelet[2544]: I0904 17:30:18.135502 2544 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:30:18.158340 kubelet[2544]: E0904 17:30:18.157893 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.158340 kubelet[2544]: E0904 17:30:18.158019 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.166348 kubelet[2544]: E0904 17:30:18.166300 2544 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:30:18.167074 kubelet[2544]: E0904 17:30:18.166707 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:18.174571 kubelet[2544]: I0904 17:30:18.174502 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.174485392 podStartE2EDuration="1.174485392s" podCreationTimestamp="2024-09-04 17:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:18.174316526 +0000 UTC m=+1.108607584" watchObservedRunningTime="2024-09-04 17:30:18.174485392 +0000 UTC m=+1.108776450" Sep 4 17:30:18.199205 kubelet[2544]: I0904 17:30:18.199036 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.199018309 podStartE2EDuration="1.199018309s" podCreationTimestamp="2024-09-04 17:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:18.198783947 +0000 UTC m=+1.133075015" watchObservedRunningTime="2024-09-04 17:30:18.199018309 +0000 UTC m=+1.133309367" Sep 4 17:30:18.223019 kubelet[2544]: I0904 17:30:18.222794 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.222772425 podStartE2EDuration="1.222772425s" podCreationTimestamp="2024-09-04 17:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:18.208938779 +0000 UTC m=+1.143229837" watchObservedRunningTime="2024-09-04 17:30:18.222772425 +0000 UTC m=+1.157063484" Sep 4 17:30:19.158859 kubelet[2544]: E0904 17:30:19.158820 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:19.159252 kubelet[2544]: E0904 17:30:19.158934 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:19.503089 sudo[1631]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:19.504977 sshd[1627]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:19.509008 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:47462.service: Deactivated successfully. Sep 4 17:30:19.510828 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:30:19.511009 systemd[1]: session-7.scope: Consumed 4.745s CPU time, 143.4M memory peak, 0B memory swap peak. Sep 4 17:30:19.511389 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:30:19.512148 systemd-logind[1437]: Removed session 7. 
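For reference, the NodeConfig dump from the restarted kubelet (PID 2544) above carries the default hard eviction thresholds: memory.available &lt; 100Mi, nodefs.available &lt; 10%, nodefs.inodesFree &lt; 5%, imagefs.available &lt; 15%, imagefs.inodesFree &lt; 5%. A minimal Go sketch of how such quantity and percentage thresholds compare against observed stats; the capacity and usage numbers below are invented sample values, and the real eviction manager's bookkeeping is considerably more involved:

// thresholds.go -- sketch of quantity vs. percentage eviction checks using the
// threshold values from the NodeConfig dump above; observed numbers are samples.
package main

import "fmt"

func main() {
	const (
		memAvailable = 80 * 1024 * 1024        // sample: 80Mi memory free
		fsCapacity   = 20 * 1024 * 1024 * 1024 // sample: 20Gi nodefs capacity
		fsAvailable  = 3 * 1024 * 1024 * 1024  // sample: 3Gi nodefs free
	)

	// memory.available is an absolute quantity threshold (100Mi).
	if memAvailable < 100*1024*1024 {
		fmt.Println("memory.available below 100Mi -> eviction signal fires")
	}

	// nodefs.available is a percentage-of-capacity threshold (10%).
	if float64(fsAvailable)/float64(fsCapacity) < 0.10 {
		fmt.Println("nodefs.available below 10% -> eviction signal fires")
	} else {
		fmt.Println("nodefs.available is above the 10% threshold")
	}
}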
Sep 4 17:30:25.720941 kubelet[2544]: E0904 17:30:25.720901 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:26.183865 kubelet[2544]: E0904 17:30:26.183741 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:26.360869 kubelet[2544]: E0904 17:30:26.360819 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:27.184867 kubelet[2544]: E0904 17:30:27.184837 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:28.523083 kubelet[2544]: E0904 17:30:28.523034 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:31.023856 update_engine[1441]: I0904 17:30:31.023782 1441 update_attempter.cc:509] Updating boot flags... Sep 4 17:30:31.059893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2628) Sep 4 17:30:31.118442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2629) Sep 4 17:30:31.197392 kubelet[2544]: I0904 17:30:31.197342 2544 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:30:31.197819 containerd[1454]: time="2024-09-04T17:30:31.197679353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:30:31.198064 kubelet[2544]: I0904 17:30:31.197886 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:30:31.356683 kubelet[2544]: I0904 17:30:31.356436 2544 topology_manager.go:215] "Topology Admit Handler" podUID="36f47c45-c348-4911-8c5b-52205f0482ed" podNamespace="kube-system" podName="kube-proxy-nbgs9" Sep 4 17:30:31.362923 kubelet[2544]: I0904 17:30:31.362604 2544 topology_manager.go:215] "Topology Admit Handler" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" podNamespace="kube-system" podName="cilium-fcpps" Sep 4 17:30:31.370804 systemd[1]: Created slice kubepods-besteffort-pod36f47c45_c348_4911_8c5b_52205f0482ed.slice - libcontainer container kubepods-besteffort-pod36f47c45_c348_4911_8c5b_52205f0482ed.slice. Sep 4 17:30:31.383894 systemd[1]: Created slice kubepods-burstable-pod3abd6f42_0e27_40ca_87ad_80c60d7b5b43.slice - libcontainer container kubepods-burstable-pod3abd6f42_0e27_40ca_87ad_80c60d7b5b43.slice. 
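The kuberuntime_manager and kubelet_network entries above record the node being handed podCIDR 192.168.0.0/24 and that setting being pushed to the container runtime over CRI; the CNI config itself is still missing ("No cni config template is specified, wait for other system components to drop the config"), which is what the cilium pods admitted next are for. A quick Go sketch of what that CIDR covers (only the CIDR string is taken from the log):

// podcidr.go -- parses the podCIDR reported in the log and prints its range.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pod IPs on this node,
	// minus whatever the CNI plugin reserves.
	fmt.Printf("network %s, mask /%d, %d addresses\n", ipnet, ones, 1<<(bits-ones))
}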
Sep 4 17:30:31.426344 kubelet[2544]: I0904 17:30:31.426286 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f47c45-c348-4911-8c5b-52205f0482ed-lib-modules\") pod \"kube-proxy-nbgs9\" (UID: \"36f47c45-c348-4911-8c5b-52205f0482ed\") " pod="kube-system/kube-proxy-nbgs9" Sep 4 17:30:31.426344 kubelet[2544]: I0904 17:30:31.426339 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cni-path\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.426344 kubelet[2544]: I0904 17:30:31.426360 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36f47c45-c348-4911-8c5b-52205f0482ed-kube-proxy\") pod \"kube-proxy-nbgs9\" (UID: \"36f47c45-c348-4911-8c5b-52205f0482ed\") " pod="kube-system/kube-proxy-nbgs9" Sep 4 17:30:31.426636 kubelet[2544]: I0904 17:30:31.426395 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-run\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.426636 kubelet[2544]: I0904 17:30:31.426413 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f47c45-c348-4911-8c5b-52205f0482ed-xtables-lock\") pod \"kube-proxy-nbgs9\" (UID: \"36f47c45-c348-4911-8c5b-52205f0482ed\") " pod="kube-system/kube-proxy-nbgs9" Sep 4 17:30:31.426636 kubelet[2544]: I0904 17:30:31.426466 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hostproc\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.426911 kubelet[2544]: I0904 17:30:31.426850 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9m9\" (UniqueName: \"kubernetes.io/projected/36f47c45-c348-4911-8c5b-52205f0482ed-kube-api-access-kh9m9\") pod \"kube-proxy-nbgs9\" (UID: \"36f47c45-c348-4911-8c5b-52205f0482ed\") " pod="kube-system/kube-proxy-nbgs9" Sep 4 17:30:31.427078 kubelet[2544]: I0904 17:30:31.426980 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-bpf-maps\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.427078 kubelet[2544]: I0904 17:30:31.427024 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-cgroup\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.527549 kubelet[2544]: I0904 17:30:31.527484 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrz9\" (UniqueName: 
\"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.527549 kubelet[2544]: I0904 17:30:31.527551 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-net\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.527549 kubelet[2544]: I0904 17:30:31.527576 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-kernel\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.527828 kubelet[2544]: I0904 17:30:31.527599 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hubble-tls\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.527828 kubelet[2544]: I0904 17:30:31.527804 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-xtables-lock\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.528336 kubelet[2544]: I0904 17:30:31.527878 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-config-path\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.528336 kubelet[2544]: I0904 17:30:31.527919 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-etc-cni-netd\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.528336 kubelet[2544]: I0904 17:30:31.527945 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-clustermesh-secrets\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.528336 kubelet[2544]: I0904 17:30:31.527999 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-lib-modules\") pod \"cilium-fcpps\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " pod="kube-system/cilium-fcpps" Sep 4 17:30:31.532879 kubelet[2544]: E0904 17:30:31.532843 2544 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 17:30:31.532879 kubelet[2544]: E0904 17:30:31.532880 2544 projected.go:200] Error preparing data for projected volume kube-api-access-kh9m9 for pod kube-system/kube-proxy-nbgs9: 
configmap "kube-root-ca.crt" not found Sep 4 17:30:31.533043 kubelet[2544]: E0904 17:30:31.532958 2544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36f47c45-c348-4911-8c5b-52205f0482ed-kube-api-access-kh9m9 podName:36f47c45-c348-4911-8c5b-52205f0482ed nodeName:}" failed. No retries permitted until 2024-09-04 17:30:32.03293067 +0000 UTC m=+14.967221799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kh9m9" (UniqueName: "kubernetes.io/projected/36f47c45-c348-4911-8c5b-52205f0482ed-kube-api-access-kh9m9") pod "kube-proxy-nbgs9" (UID: "36f47c45-c348-4911-8c5b-52205f0482ed") : configmap "kube-root-ca.crt" not found Sep 4 17:30:31.636212 kubelet[2544]: E0904 17:30:31.635340 2544 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 17:30:31.636212 kubelet[2544]: E0904 17:30:31.635392 2544 projected.go:200] Error preparing data for projected volume kube-api-access-gnrz9 for pod kube-system/cilium-fcpps: configmap "kube-root-ca.crt" not found Sep 4 17:30:31.636212 kubelet[2544]: E0904 17:30:31.635440 2544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9 podName:3abd6f42-0e27-40ca-87ad-80c60d7b5b43 nodeName:}" failed. No retries permitted until 2024-09-04 17:30:32.135423429 +0000 UTC m=+15.069714487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gnrz9" (UniqueName: "kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9") pod "cilium-fcpps" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43") : configmap "kube-root-ca.crt" not found Sep 4 17:30:32.150654 kubelet[2544]: I0904 17:30:32.150604 2544 topology_manager.go:215] "Topology Admit Handler" podUID="63718805-1681-4def-93f1-2666f88d400b" podNamespace="kube-system" podName="cilium-operator-599987898-99lpx" Sep 4 17:30:32.160737 systemd[1]: Created slice kubepods-besteffort-pod63718805_1681_4def_93f1_2666f88d400b.slice - libcontainer container kubepods-besteffort-pod63718805_1681_4def_93f1_2666f88d400b.slice. 
Sep 4 17:30:32.234795 kubelet[2544]: I0904 17:30:32.234730 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63718805-1681-4def-93f1-2666f88d400b-cilium-config-path\") pod \"cilium-operator-599987898-99lpx\" (UID: \"63718805-1681-4def-93f1-2666f88d400b\") " pod="kube-system/cilium-operator-599987898-99lpx" Sep 4 17:30:32.234795 kubelet[2544]: I0904 17:30:32.234769 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwc4p\" (UniqueName: \"kubernetes.io/projected/63718805-1681-4def-93f1-2666f88d400b-kube-api-access-dwc4p\") pod \"cilium-operator-599987898-99lpx\" (UID: \"63718805-1681-4def-93f1-2666f88d400b\") " pod="kube-system/cilium-operator-599987898-99lpx" Sep 4 17:30:32.280684 kubelet[2544]: E0904 17:30:32.280643 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:32.281362 containerd[1454]: time="2024-09-04T17:30:32.281319664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbgs9,Uid:36f47c45-c348-4911-8c5b-52205f0482ed,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:32.286870 kubelet[2544]: E0904 17:30:32.286836 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:32.287282 containerd[1454]: time="2024-09-04T17:30:32.287253739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcpps,Uid:3abd6f42-0e27-40ca-87ad-80c60d7b5b43,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:32.318888 containerd[1454]: time="2024-09-04T17:30:32.318700281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:32.318888 containerd[1454]: time="2024-09-04T17:30:32.318780894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.318888 containerd[1454]: time="2024-09-04T17:30:32.318816532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:32.318888 containerd[1454]: time="2024-09-04T17:30:32.318836970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.320002 containerd[1454]: time="2024-09-04T17:30:32.319540715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:32.320002 containerd[1454]: time="2024-09-04T17:30:32.319591120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.320002 containerd[1454]: time="2024-09-04T17:30:32.319671573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:32.320002 containerd[1454]: time="2024-09-04T17:30:32.319692142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.342640 systemd[1]: Started cri-containerd-ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66.scope - libcontainer container ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66. Sep 4 17:30:32.346937 systemd[1]: Started cri-containerd-379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b.scope - libcontainer container 379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b. Sep 4 17:30:32.371650 containerd[1454]: time="2024-09-04T17:30:32.371580394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbgs9,Uid:36f47c45-c348-4911-8c5b-52205f0482ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66\"" Sep 4 17:30:32.372328 kubelet[2544]: E0904 17:30:32.372296 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:32.374690 containerd[1454]: time="2024-09-04T17:30:32.374640358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcpps,Uid:3abd6f42-0e27-40ca-87ad-80c60d7b5b43,Namespace:kube-system,Attempt:0,} returns sandbox id \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\"" Sep 4 17:30:32.375211 kubelet[2544]: E0904 17:30:32.375172 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:32.375906 containerd[1454]: time="2024-09-04T17:30:32.375858990Z" level=info msg="CreateContainer within sandbox \"ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:30:32.376590 containerd[1454]: time="2024-09-04T17:30:32.376551052Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:30:32.397485 containerd[1454]: time="2024-09-04T17:30:32.397430353Z" level=info msg="CreateContainer within sandbox \"ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8bfb6522420b7ef7699373983d2bff4c22b1892d55d98b7797a985951cb03f7\"" Sep 4 17:30:32.398181 containerd[1454]: time="2024-09-04T17:30:32.398046261Z" level=info msg="StartContainer for \"c8bfb6522420b7ef7699373983d2bff4c22b1892d55d98b7797a985951cb03f7\"" Sep 4 17:30:32.429565 systemd[1]: Started cri-containerd-c8bfb6522420b7ef7699373983d2bff4c22b1892d55d98b7797a985951cb03f7.scope - libcontainer container c8bfb6522420b7ef7699373983d2bff4c22b1892d55d98b7797a985951cb03f7. 
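Tracing a pod through the rest of this log means following its sandbox id: the 64-hex id that RunPodSandbox returns above reappears in the cri-containerd-<id>.scope units and the rootfs mount paths. A small sketch for pulling those ids out of journal text like this; the regular expression is written against the phrasing of these containerd entries and tolerates the escaped quotes:

    package main

    import (
        "fmt"
        "regexp"
    )

    // sandboxID matches the 64-hex-character id containerd prints when
    // RunPodSandbox returns, as in the journal excerpt above.
    var sandboxID = regexp.MustCompile(`returns sandbox id \\?"([0-9a-f]{64})`)

    func main() {
        line := `RunPodSandbox for ... returns sandbox id \"ab4206d4c79f1322dbefe85d37a73310eb372a556676ca4720ada6f74e9abd66\"`
        if m := sandboxID.FindStringSubmatch(line); m != nil {
            fmt.Println("sandbox:", m[1])
        }
    }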
Sep 4 17:30:32.459028 containerd[1454]: time="2024-09-04T17:30:32.458954290Z" level=info msg="StartContainer for \"c8bfb6522420b7ef7699373983d2bff4c22b1892d55d98b7797a985951cb03f7\" returns successfully" Sep 4 17:30:32.464653 kubelet[2544]: E0904 17:30:32.464622 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:32.465610 containerd[1454]: time="2024-09-04T17:30:32.465576700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-99lpx,Uid:63718805-1681-4def-93f1-2666f88d400b,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:32.494019 containerd[1454]: time="2024-09-04T17:30:32.493771095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:32.494019 containerd[1454]: time="2024-09-04T17:30:32.493821360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.494019 containerd[1454]: time="2024-09-04T17:30:32.493840115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:32.494019 containerd[1454]: time="2024-09-04T17:30:32.493850656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:32.516545 systemd[1]: Started cri-containerd-d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354.scope - libcontainer container d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354. Sep 4 17:30:32.556608 containerd[1454]: time="2024-09-04T17:30:32.556561335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-99lpx,Uid:63718805-1681-4def-93f1-2666f88d400b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\"" Sep 4 17:30:32.557250 kubelet[2544]: E0904 17:30:32.557227 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:33.195595 kubelet[2544]: E0904 17:30:33.195267 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:37.162913 kubelet[2544]: I0904 17:30:37.162839 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbgs9" podStartSLOduration=6.16281752 podStartE2EDuration="6.16281752s" podCreationTimestamp="2024-09-04 17:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:33.203573058 +0000 UTC m=+16.137864116" watchObservedRunningTime="2024-09-04 17:30:37.16281752 +0000 UTC m=+20.097108578" Sep 4 17:30:42.678745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762618040.mount: Deactivated successfully. Sep 4 17:30:42.886579 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:49608.service - OpenSSH per-connection server daemon (10.0.0.1:49608). 
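The recurring "Nameserver limits exceeded" warnings in the kubelet entries above come from the kubelet capping how many nameservers it passes through to pods; the applied line keeps exactly three servers (1.1.1.1, 1.0.0.1, 8.8.8.8). A small host-side check, assuming the limit of three implied by that warning and the conventional /etc/resolv.conf path (on this host the effective file may be managed by systemd-resolved):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Count nameserver entries; the kubelet warning above fires when more
        // are configured than it will pass through to pods.
        const limit = 3 // assumed from the three servers kept in the log line
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        fmt.Printf("%d nameservers: %v\n", len(servers), servers)
        if len(servers) > limit {
            fmt.Printf("more than %d: some would be omitted, as in the kubelet warning\n", limit)
        }
    }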
Sep 4 17:30:43.893350 sshd[2923]: Accepted publickey for core from 10.0.0.1 port 49608 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:30:43.895068 sshd[2923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:43.899505 systemd-logind[1437]: New session 8 of user core. Sep 4 17:30:43.907562 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:30:44.120121 sshd[2923]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:44.125293 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:49608.service: Deactivated successfully. Sep 4 17:30:44.127689 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:30:44.128339 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:30:44.129353 systemd-logind[1437]: Removed session 8. Sep 4 17:30:46.818233 containerd[1454]: time="2024-09-04T17:30:46.818181259Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:46.818983 containerd[1454]: time="2024-09-04T17:30:46.818939459Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735263" Sep 4 17:30:46.820149 containerd[1454]: time="2024-09-04T17:30:46.820119903Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:46.821938 containerd[1454]: time="2024-09-04T17:30:46.821895089Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.445306366s" Sep 4 17:30:46.821938 containerd[1454]: time="2024-09-04T17:30:46.821936697Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 17:30:46.822804 containerd[1454]: time="2024-09-04T17:30:46.822776419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:30:46.825589 containerd[1454]: time="2024-09-04T17:30:46.825552221Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:30:46.867560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633257922.mount: Deactivated successfully. 
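The cilium image pull above reports 166735263 bytes read over 14.445306366s. A back-of-the-envelope transfer rate from those two logged values, which works out to roughly 11 MiB/s:

    package main

    import "fmt"

    func main() {
        // Values copied from the "stop pulling image" / "Pulled image" entries above.
        const byteCount = 166735263.0
        const seconds = 14.445306366
        fmt.Printf("%.1f MiB/s\n", byteCount/seconds/(1024*1024))
    }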
Sep 4 17:30:46.869488 containerd[1454]: time="2024-09-04T17:30:46.869451965Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\"" Sep 4 17:30:46.869913 containerd[1454]: time="2024-09-04T17:30:46.869880011Z" level=info msg="StartContainer for \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\"" Sep 4 17:30:46.899507 systemd[1]: Started cri-containerd-4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44.scope - libcontainer container 4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44. Sep 4 17:30:46.938054 systemd[1]: cri-containerd-4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44.scope: Deactivated successfully. Sep 4 17:30:47.084394 containerd[1454]: time="2024-09-04T17:30:47.084198246Z" level=info msg="StartContainer for \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\" returns successfully" Sep 4 17:30:47.325268 containerd[1454]: time="2024-09-04T17:30:47.325201139Z" level=info msg="shim disconnected" id=4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44 namespace=k8s.io Sep 4 17:30:47.325268 containerd[1454]: time="2024-09-04T17:30:47.325261171Z" level=warning msg="cleaning up after shim disconnected" id=4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44 namespace=k8s.io Sep 4 17:30:47.325268 containerd[1454]: time="2024-09-04T17:30:47.325274977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:47.805895 kubelet[2544]: E0904 17:30:47.805865 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:47.807842 containerd[1454]: time="2024-09-04T17:30:47.807807257Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:30:47.824592 containerd[1454]: time="2024-09-04T17:30:47.824542497Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\"" Sep 4 17:30:47.825110 containerd[1454]: time="2024-09-04T17:30:47.825039523Z" level=info msg="StartContainer for \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\"" Sep 4 17:30:47.859667 systemd[1]: Started cri-containerd-9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef.scope - libcontainer container 9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef. Sep 4 17:30:47.865676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44-rootfs.mount: Deactivated successfully. Sep 4 17:30:47.887667 containerd[1454]: time="2024-09-04T17:30:47.887596307Z" level=info msg="StartContainer for \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\" returns successfully" Sep 4 17:30:47.900677 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:30:47.901008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
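The apply-sysctl-overwrites init container, and the systemd-sysctl restart that brackets it here, are both about kernel parameters under /proc/sys. As a small illustration of reading one such parameter from Go (the key chosen below is only an example; the log does not show which sysctls cilium actually rewrites):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readSysctl reads a kernel parameter by its /proc/sys path,
    // e.g. "net/ipv4/ip_forward" for net.ipv4.ip_forward.
    func readSysctl(key string) (string, error) {
        b, err := os.ReadFile("/proc/sys/" + key)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := readSysctl("net/ipv4/ip_forward")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("net.ipv4.ip_forward =", v)
    }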
Sep 4 17:30:47.901113 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:30:47.906962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:30:47.907541 systemd[1]: cri-containerd-9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef.scope: Deactivated successfully. Sep 4 17:30:47.925386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef-rootfs.mount: Deactivated successfully. Sep 4 17:30:47.949006 containerd[1454]: time="2024-09-04T17:30:47.948925777Z" level=info msg="shim disconnected" id=9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef namespace=k8s.io Sep 4 17:30:47.949006 containerd[1454]: time="2024-09-04T17:30:47.948983186Z" level=warning msg="cleaning up after shim disconnected" id=9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef namespace=k8s.io Sep 4 17:30:47.949006 containerd[1454]: time="2024-09-04T17:30:47.948992484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:47.956967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:30:48.591966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186087808.mount: Deactivated successfully. Sep 4 17:30:48.809507 kubelet[2544]: E0904 17:30:48.809466 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:48.812457 containerd[1454]: time="2024-09-04T17:30:48.812422303Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:30:48.884312 containerd[1454]: time="2024-09-04T17:30:48.884194715Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\"" Sep 4 17:30:48.884847 containerd[1454]: time="2024-09-04T17:30:48.884788543Z" level=info msg="StartContainer for \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\"" Sep 4 17:30:48.903343 containerd[1454]: time="2024-09-04T17:30:48.903300212Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:48.905626 containerd[1454]: time="2024-09-04T17:30:48.905529993Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907209" Sep 4 17:30:48.906478 containerd[1454]: time="2024-09-04T17:30:48.906444835Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:48.908737 containerd[1454]: time="2024-09-04T17:30:48.908672802Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", 
size \"18897442\" in 2.085850276s" Sep 4 17:30:48.908800 containerd[1454]: time="2024-09-04T17:30:48.908743385Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 17:30:48.910896 systemd[1]: run-containerd-runc-k8s.io-83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236-runc.TCCXTW.mount: Deactivated successfully. Sep 4 17:30:48.913762 containerd[1454]: time="2024-09-04T17:30:48.913730609Z" level=info msg="CreateContainer within sandbox \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:30:48.917537 systemd[1]: Started cri-containerd-83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236.scope - libcontainer container 83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236. Sep 4 17:30:48.932113 containerd[1454]: time="2024-09-04T17:30:48.932060386Z" level=info msg="CreateContainer within sandbox \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\"" Sep 4 17:30:48.932822 containerd[1454]: time="2024-09-04T17:30:48.932795791Z" level=info msg="StartContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\"" Sep 4 17:30:48.955499 systemd[1]: cri-containerd-83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236.scope: Deactivated successfully. Sep 4 17:30:48.957085 containerd[1454]: time="2024-09-04T17:30:48.957017616Z" level=info msg="StartContainer for \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\" returns successfully" Sep 4 17:30:48.969571 systemd[1]: Started cri-containerd-55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6.scope - libcontainer container 55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6. Sep 4 17:30:49.136452 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:35450.service - OpenSSH per-connection server daemon (10.0.0.1:35450). Sep 4 17:30:49.260550 containerd[1454]: time="2024-09-04T17:30:49.260501337Z" level=info msg="StartContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" returns successfully" Sep 4 17:30:49.267494 containerd[1454]: time="2024-09-04T17:30:49.265912337Z" level=info msg="shim disconnected" id=83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236 namespace=k8s.io Sep 4 17:30:49.267494 containerd[1454]: time="2024-09-04T17:30:49.265983000Z" level=warning msg="cleaning up after shim disconnected" id=83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236 namespace=k8s.io Sep 4 17:30:49.267494 containerd[1454]: time="2024-09-04T17:30:49.265993430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:49.303264 sshd[3186]: Accepted publickey for core from 10.0.0.1 port 35450 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:30:49.308193 sshd[3186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:49.320874 systemd-logind[1437]: New session 9 of user core. Sep 4 17:30:49.327593 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 4 17:30:49.510718 sshd[3186]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:49.515335 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:35450.service: Deactivated successfully. Sep 4 17:30:49.518494 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:30:49.519282 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:30:49.520508 systemd-logind[1437]: Removed session 9. Sep 4 17:30:49.819148 kubelet[2544]: E0904 17:30:49.819001 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:49.823970 kubelet[2544]: E0904 17:30:49.823936 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:49.827025 containerd[1454]: time="2024-09-04T17:30:49.826982547Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:30:49.855461 containerd[1454]: time="2024-09-04T17:30:49.855406715Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\"" Sep 4 17:30:49.856201 containerd[1454]: time="2024-09-04T17:30:49.856157940Z" level=info msg="StartContainer for \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\"" Sep 4 17:30:49.880242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236-rootfs.mount: Deactivated successfully. Sep 4 17:30:49.914014 kubelet[2544]: I0904 17:30:49.913859 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-99lpx" podStartSLOduration=1.560579572 podStartE2EDuration="17.913839715s" podCreationTimestamp="2024-09-04 17:30:32 +0000 UTC" firstStartedPulling="2024-09-04 17:30:32.557980305 +0000 UTC m=+15.492271364" lastFinishedPulling="2024-09-04 17:30:48.911240459 +0000 UTC m=+31.845531507" observedRunningTime="2024-09-04 17:30:49.84505993 +0000 UTC m=+32.779351008" watchObservedRunningTime="2024-09-04 17:30:49.913839715 +0000 UTC m=+32.848130773" Sep 4 17:30:49.916561 systemd[1]: Started cri-containerd-12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520.scope - libcontainer container 12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520. Sep 4 17:30:49.951536 systemd[1]: cri-containerd-12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520.scope: Deactivated successfully. Sep 4 17:30:49.966619 containerd[1454]: time="2024-09-04T17:30:49.966558864Z" level=info msg="StartContainer for \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\" returns successfully" Sep 4 17:30:50.000882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520-rootfs.mount: Deactivated successfully. 
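The startup entry for cilium-operator just above reports podStartE2EDuration=17.913839715s but podStartSLOduration=1.560579572s; the gap matches the image-pull window between firstStartedPulling and lastFinishedPulling, so the SLO figure appears to exclude pull time. A rough check using the timestamps copied from that entry (this reading is inferred from the numbers; the few-nanosecond mismatch is consistent with wall-clock versus monotonic readings):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the pod_startup_latency_tracker entry for cilium-operator.
        created := mustParse("2024-09-04 17:30:32 +0000 UTC")
        firstPull := mustParse("2024-09-04 17:30:32.557980305 +0000 UTC")
        lastPull := mustParse("2024-09-04 17:30:48.911240459 +0000 UTC")
        observed := mustParse("2024-09-04 17:30:49.913839715 +0000 UTC")

        e2e := observed.Sub(created)
        pull := lastPull.Sub(firstPull)
        fmt.Println("e2e:       ", e2e)      // 17.913839715s, as logged
        fmt.Println("image pull:", pull)     // about 16.353s
        fmt.Println("e2e - pull:", e2e-pull) // about 1.56s, close to the logged SLO duration
    }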
Sep 4 17:30:50.009590 containerd[1454]: time="2024-09-04T17:30:50.009513901Z" level=info msg="shim disconnected" id=12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520 namespace=k8s.io Sep 4 17:30:50.009590 containerd[1454]: time="2024-09-04T17:30:50.009584314Z" level=warning msg="cleaning up after shim disconnected" id=12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520 namespace=k8s.io Sep 4 17:30:50.009590 containerd[1454]: time="2024-09-04T17:30:50.009595285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:50.826994 kubelet[2544]: E0904 17:30:50.826967 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:50.826994 kubelet[2544]: E0904 17:30:50.826998 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:50.828561 containerd[1454]: time="2024-09-04T17:30:50.828531276Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:30:50.992932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233089120.mount: Deactivated successfully. Sep 4 17:30:51.101642 containerd[1454]: time="2024-09-04T17:30:51.101504196Z" level=info msg="CreateContainer within sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\"" Sep 4 17:30:51.102098 containerd[1454]: time="2024-09-04T17:30:51.102012182Z" level=info msg="StartContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\"" Sep 4 17:30:51.140641 systemd[1]: Started cri-containerd-ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b.scope - libcontainer container ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b. Sep 4 17:30:51.176730 containerd[1454]: time="2024-09-04T17:30:51.176679121Z" level=info msg="StartContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" returns successfully" Sep 4 17:30:51.318122 kubelet[2544]: I0904 17:30:51.318076 2544 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:30:51.336522 kubelet[2544]: I0904 17:30:51.336480 2544 topology_manager.go:215] "Topology Admit Handler" podUID="23d0ab71-67d2-487b-ad28-411820d8554d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ssdgn" Sep 4 17:30:51.339015 kubelet[2544]: I0904 17:30:51.338982 2544 topology_manager.go:215] "Topology Admit Handler" podUID="73e0d50e-c041-46cd-89ca-920698ff7134" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jnxs9" Sep 4 17:30:51.349082 systemd[1]: Created slice kubepods-burstable-pod23d0ab71_67d2_487b_ad28_411820d8554d.slice - libcontainer container kubepods-burstable-pod23d0ab71_67d2_487b_ad28_411820d8554d.slice. Sep 4 17:30:51.357137 systemd[1]: Created slice kubepods-burstable-pod73e0d50e_c041_46cd_89ca_920698ff7134.slice - libcontainer container kubepods-burstable-pod73e0d50e_c041_46cd_89ca_920698ff7134.slice. 
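The kubepods-…-pod….slice units created around here encode the pod UID with its dashes turned into underscores, since "-" acts as the nesting separator in systemd slice names (kubepods-burstable-pod….slice sits under kubepods-burstable.slice under kubepods.slice). A tiny helper reproducing the naming pattern visible in these entries; it mirrors the observed output rather than quoting kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the systemd slice naming visible in the log:
    // dashes in the pod UID are replaced with underscores because "-" is
    // the nesting separator in slice unit names.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "73e0d50e-c041-46cd-89ca-920698ff7134"))
        // kubepods-burstable-pod73e0d50e_c041_46cd_89ca_920698ff7134.slice, as above
    }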
Sep 4 17:30:51.429898 kubelet[2544]: I0904 17:30:51.429752 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzz75\" (UniqueName: \"kubernetes.io/projected/23d0ab71-67d2-487b-ad28-411820d8554d-kube-api-access-rzz75\") pod \"coredns-7db6d8ff4d-ssdgn\" (UID: \"23d0ab71-67d2-487b-ad28-411820d8554d\") " pod="kube-system/coredns-7db6d8ff4d-ssdgn" Sep 4 17:30:51.429898 kubelet[2544]: I0904 17:30:51.429811 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdnrc\" (UniqueName: \"kubernetes.io/projected/73e0d50e-c041-46cd-89ca-920698ff7134-kube-api-access-qdnrc\") pod \"coredns-7db6d8ff4d-jnxs9\" (UID: \"73e0d50e-c041-46cd-89ca-920698ff7134\") " pod="kube-system/coredns-7db6d8ff4d-jnxs9" Sep 4 17:30:51.429898 kubelet[2544]: I0904 17:30:51.429831 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23d0ab71-67d2-487b-ad28-411820d8554d-config-volume\") pod \"coredns-7db6d8ff4d-ssdgn\" (UID: \"23d0ab71-67d2-487b-ad28-411820d8554d\") " pod="kube-system/coredns-7db6d8ff4d-ssdgn" Sep 4 17:30:51.429898 kubelet[2544]: I0904 17:30:51.429848 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73e0d50e-c041-46cd-89ca-920698ff7134-config-volume\") pod \"coredns-7db6d8ff4d-jnxs9\" (UID: \"73e0d50e-c041-46cd-89ca-920698ff7134\") " pod="kube-system/coredns-7db6d8ff4d-jnxs9" Sep 4 17:30:51.653162 kubelet[2544]: E0904 17:30:51.652762 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:51.653550 containerd[1454]: time="2024-09-04T17:30:51.653349104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ssdgn,Uid:23d0ab71-67d2-487b-ad28-411820d8554d,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:51.660702 kubelet[2544]: E0904 17:30:51.660651 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:51.661320 containerd[1454]: time="2024-09-04T17:30:51.661273210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnxs9,Uid:73e0d50e-c041-46cd-89ca-920698ff7134,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:51.830799 kubelet[2544]: E0904 17:30:51.830761 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:51.841393 kubelet[2544]: I0904 17:30:51.840753 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fcpps" podStartSLOduration=6.394314971 podStartE2EDuration="20.84073521s" podCreationTimestamp="2024-09-04 17:30:31 +0000 UTC" firstStartedPulling="2024-09-04 17:30:32.376184247 +0000 UTC m=+15.310475305" lastFinishedPulling="2024-09-04 17:30:46.822604486 +0000 UTC m=+29.756895544" observedRunningTime="2024-09-04 17:30:51.840087691 +0000 UTC m=+34.774378749" watchObservedRunningTime="2024-09-04 17:30:51.84073521 +0000 UTC m=+34.775026258" Sep 4 17:30:52.832152 kubelet[2544]: E0904 17:30:52.832119 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:53.418480 systemd-networkd[1385]: cilium_host: Link UP Sep 4 17:30:53.418643 systemd-networkd[1385]: cilium_net: Link UP Sep 4 17:30:53.418830 systemd-networkd[1385]: cilium_net: Gained carrier Sep 4 17:30:53.419005 systemd-networkd[1385]: cilium_host: Gained carrier Sep 4 17:30:53.504503 systemd-networkd[1385]: cilium_net: Gained IPv6LL Sep 4 17:30:53.524266 systemd-networkd[1385]: cilium_vxlan: Link UP Sep 4 17:30:53.524274 systemd-networkd[1385]: cilium_vxlan: Gained carrier Sep 4 17:30:53.735502 kernel: NET: Registered PF_ALG protocol family Sep 4 17:30:53.836881 kubelet[2544]: E0904 17:30:53.836690 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:53.900608 systemd-networkd[1385]: cilium_host: Gained IPv6LL Sep 4 17:30:54.430845 systemd-networkd[1385]: lxc_health: Link UP Sep 4 17:30:54.438974 systemd-networkd[1385]: lxc_health: Gained carrier Sep 4 17:30:54.528947 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:35460.service - OpenSSH per-connection server daemon (10.0.0.1:35460). Sep 4 17:30:54.560804 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 35460 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:30:54.562539 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:54.566428 systemd-logind[1437]: New session 10 of user core. Sep 4 17:30:54.574506 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:30:54.715890 systemd-networkd[1385]: lxca911aa5e5046: Link UP Sep 4 17:30:54.723617 sshd[3763]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:54.724410 kernel: eth0: renamed from tmp0028b Sep 4 17:30:54.731830 systemd-networkd[1385]: lxc8d1819a88694: Link UP Sep 4 17:30:54.733813 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:30:54.734812 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:35460.service: Deactivated successfully. Sep 4 17:30:54.736735 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:30:54.737787 systemd-logind[1437]: Removed session 10. 
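The systemd-networkd entries above show the cilium datapath coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, lxc_health, and then one lxc* device per pod as the coredns sandboxes are plumbed (the tmp0028b/tmpd29cf names being renamed to eth0 line up with the coredns sandbox ids created shortly after). A standard-library way to list those links on the node; the name prefixes are taken from this log and will differ under other datapath modes:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, ifc := range ifaces {
            // The prefixes below match the links reported by systemd-networkd above.
            if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
                up := ifc.Flags&net.FlagUp != 0
                fmt.Printf("%-20s up=%v mtu=%d\n", ifc.Name, up, ifc.MTU)
            }
        }
    }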
Sep 4 17:30:54.741396 kernel: eth0: renamed from tmpd29cf Sep 4 17:30:54.749175 systemd-networkd[1385]: lxca911aa5e5046: Gained carrier Sep 4 17:30:54.751788 systemd-networkd[1385]: lxc8d1819a88694: Gained carrier Sep 4 17:30:54.920581 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Sep 4 17:30:56.075451 systemd-networkd[1385]: lxc_health: Gained IPv6LL Sep 4 17:30:56.264582 systemd-networkd[1385]: lxca911aa5e5046: Gained IPv6LL Sep 4 17:30:56.291238 kubelet[2544]: E0904 17:30:56.291196 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:56.648504 systemd-networkd[1385]: lxc8d1819a88694: Gained IPv6LL Sep 4 17:30:56.844400 kubelet[2544]: E0904 17:30:56.842712 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:57.843712 kubelet[2544]: E0904 17:30:57.843672 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.275195 containerd[1454]: time="2024-09-04T17:30:58.275119094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:58.275195 containerd[1454]: time="2024-09-04T17:30:58.275166593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:58.275195 containerd[1454]: time="2024-09-04T17:30:58.275186782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:58.275195 containerd[1454]: time="2024-09-04T17:30:58.275199987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:58.276134 containerd[1454]: time="2024-09-04T17:30:58.276020910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:58.276134 containerd[1454]: time="2024-09-04T17:30:58.276097273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:58.276134 containerd[1454]: time="2024-09-04T17:30:58.276132940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:58.276270 containerd[1454]: time="2024-09-04T17:30:58.276151034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:58.293228 systemd[1]: run-containerd-runc-k8s.io-d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10-runc.7xvlaW.mount: Deactivated successfully. Sep 4 17:30:58.303507 systemd[1]: Started cri-containerd-d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10.scope - libcontainer container d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10. Sep 4 17:30:58.306967 systemd[1]: Started cri-containerd-0028b5064d9b94f1f5f29fdc5a063c18714c949bd510d89e8dd7e4a24a554308.scope - libcontainer container 0028b5064d9b94f1f5f29fdc5a063c18714c949bd510d89e8dd7e4a24a554308. 
Sep 4 17:30:58.315862 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:30:58.318570 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:30:58.343510 containerd[1454]: time="2024-09-04T17:30:58.342810327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnxs9,Uid:73e0d50e-c041-46cd-89ca-920698ff7134,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10\"" Sep 4 17:30:58.344149 kubelet[2544]: E0904 17:30:58.344127 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.345456 containerd[1454]: time="2024-09-04T17:30:58.345433850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ssdgn,Uid:23d0ab71-67d2-487b-ad28-411820d8554d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0028b5064d9b94f1f5f29fdc5a063c18714c949bd510d89e8dd7e4a24a554308\"" Sep 4 17:30:58.347139 containerd[1454]: time="2024-09-04T17:30:58.347118687Z" level=info msg="CreateContainer within sandbox \"d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:30:58.347805 kubelet[2544]: E0904 17:30:58.347787 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.350202 containerd[1454]: time="2024-09-04T17:30:58.350175385Z" level=info msg="CreateContainer within sandbox \"0028b5064d9b94f1f5f29fdc5a063c18714c949bd510d89e8dd7e4a24a554308\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:30:58.375635 containerd[1454]: time="2024-09-04T17:30:58.375588146Z" level=info msg="CreateContainer within sandbox \"0028b5064d9b94f1f5f29fdc5a063c18714c949bd510d89e8dd7e4a24a554308\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67239264bc2baddf2ad51d8d8f9f73883671fb510099fbfda46241f1f704ffd6\"" Sep 4 17:30:58.376061 containerd[1454]: time="2024-09-04T17:30:58.376041970Z" level=info msg="StartContainer for \"67239264bc2baddf2ad51d8d8f9f73883671fb510099fbfda46241f1f704ffd6\"" Sep 4 17:30:58.383862 containerd[1454]: time="2024-09-04T17:30:58.383772955Z" level=info msg="CreateContainer within sandbox \"d29cffbd0f0a576363c24e7fc03b30cf9dda4c78be12040570732c62cd51ab10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a159ce383fecc2ff831333eb4ebaa7f635ebbd6f8637aed3713541f575a4ecf4\"" Sep 4 17:30:58.384470 containerd[1454]: time="2024-09-04T17:30:58.384351723Z" level=info msg="StartContainer for \"a159ce383fecc2ff831333eb4ebaa7f635ebbd6f8637aed3713541f575a4ecf4\"" Sep 4 17:30:58.404512 systemd[1]: Started cri-containerd-67239264bc2baddf2ad51d8d8f9f73883671fb510099fbfda46241f1f704ffd6.scope - libcontainer container 67239264bc2baddf2ad51d8d8f9f73883671fb510099fbfda46241f1f704ffd6. Sep 4 17:30:58.406917 systemd[1]: Started cri-containerd-a159ce383fecc2ff831333eb4ebaa7f635ebbd6f8637aed3713541f575a4ecf4.scope - libcontainer container a159ce383fecc2ff831333eb4ebaa7f635ebbd6f8637aed3713541f575a4ecf4. 
Sep 4 17:30:58.441104 containerd[1454]: time="2024-09-04T17:30:58.441064104Z" level=info msg="StartContainer for \"a159ce383fecc2ff831333eb4ebaa7f635ebbd6f8637aed3713541f575a4ecf4\" returns successfully" Sep 4 17:30:58.441229 containerd[1454]: time="2024-09-04T17:30:58.441140819Z" level=info msg="StartContainer for \"67239264bc2baddf2ad51d8d8f9f73883671fb510099fbfda46241f1f704ffd6\" returns successfully" Sep 4 17:30:58.846895 kubelet[2544]: E0904 17:30:58.846858 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.848624 kubelet[2544]: E0904 17:30:58.848563 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.856277 kubelet[2544]: I0904 17:30:58.856131 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jnxs9" podStartSLOduration=26.856116957 podStartE2EDuration="26.856116957s" podCreationTimestamp="2024-09-04 17:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:58.85497575 +0000 UTC m=+41.789266808" watchObservedRunningTime="2024-09-04 17:30:58.856116957 +0000 UTC m=+41.790408005" Sep 4 17:30:58.885776 kubelet[2544]: I0904 17:30:58.884318 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ssdgn" podStartSLOduration=26.884295688 podStartE2EDuration="26.884295688s" podCreationTimestamp="2024-09-04 17:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:58.863334637 +0000 UTC m=+41.797625695" watchObservedRunningTime="2024-09-04 17:30:58.884295688 +0000 UTC m=+41.818586746" Sep 4 17:30:59.735182 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:56458.service - OpenSSH per-connection server daemon (10.0.0.1:56458). Sep 4 17:30:59.772996 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 56458 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:30:59.775123 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:59.780449 systemd-logind[1437]: New session 11 of user core. Sep 4 17:30:59.791629 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:30:59.850658 kubelet[2544]: E0904 17:30:59.850613 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:59.851075 kubelet[2544]: E0904 17:30:59.851049 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:59.931567 sshd[3983]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:59.939444 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:56458.service: Deactivated successfully. Sep 4 17:30:59.941280 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:30:59.942744 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:30:59.948676 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:56460.service - OpenSSH per-connection server daemon (10.0.0.1:56460). 
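The coredns startup entries above record firstStartedPulling and lastFinishedPulling as 0001-01-01 00:00:00 +0000 UTC, which is Go's zero time.Time rendered as a string; most plausibly the tracker simply never recorded a pull because the image was already present. A quick confirmation that the literal string from the log parses back to the zero value:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        // Literal value copied from the firstStartedPulling field above.
        t, err := time.Parse(layout, "0001-01-01 00:00:00 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(t.IsZero()) // true: an unset timestamp, not a real pull in year 1
    }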
Sep 4 17:30:59.949767 systemd-logind[1437]: Removed session 11. Sep 4 17:30:59.976286 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 56460 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:30:59.977799 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:59.981912 systemd-logind[1437]: New session 12 of user core. Sep 4 17:30:59.989496 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:31:00.155690 sshd[4000]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:00.167622 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:56460.service: Deactivated successfully. Sep 4 17:31:00.173043 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:31:00.175561 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:31:00.181818 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:56474.service - OpenSSH per-connection server daemon (10.0.0.1:56474). Sep 4 17:31:00.182727 systemd-logind[1437]: Removed session 12. Sep 4 17:31:00.209672 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 56474 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:00.211293 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:00.215446 systemd-logind[1437]: New session 13 of user core. Sep 4 17:31:00.229523 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:31:00.336007 sshd[4012]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:00.339797 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:56474.service: Deactivated successfully. Sep 4 17:31:00.341562 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:31:00.342124 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:31:00.343002 systemd-logind[1437]: Removed session 13. Sep 4 17:31:00.852593 kubelet[2544]: E0904 17:31:00.852553 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:00.853024 kubelet[2544]: E0904 17:31:00.852845 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:05.348640 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:56490.service - OpenSSH per-connection server daemon (10.0.0.1:56490). Sep 4 17:31:05.380198 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 56490 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:05.381790 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:05.385228 systemd-logind[1437]: New session 14 of user core. Sep 4 17:31:05.395512 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:31:05.493130 sshd[4028]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:05.496703 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:56490.service: Deactivated successfully. Sep 4 17:31:05.498554 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:31:05.499170 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:31:05.500069 systemd-logind[1437]: Removed session 14. Sep 4 17:31:10.508725 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:49298.service - OpenSSH per-connection server daemon (10.0.0.1:49298). 
Sep 4 17:31:10.542679 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 49298 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:10.544474 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:10.548641 systemd-logind[1437]: New session 15 of user core. Sep 4 17:31:10.558517 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:31:10.667993 sshd[4042]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:10.683089 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:49298.service: Deactivated successfully. Sep 4 17:31:10.685970 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:31:10.688125 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:31:10.698931 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:49308.service - OpenSSH per-connection server daemon (10.0.0.1:49308). Sep 4 17:31:10.699911 systemd-logind[1437]: Removed session 15. Sep 4 17:31:10.727676 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 49308 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:10.729652 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:10.734380 systemd-logind[1437]: New session 16 of user core. Sep 4 17:31:10.743583 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:31:10.973538 sshd[4056]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:10.990272 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:49308.service: Deactivated successfully. Sep 4 17:31:10.993117 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:31:10.995416 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:31:11.003831 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:49324.service - OpenSSH per-connection server daemon (10.0.0.1:49324). Sep 4 17:31:11.004968 systemd-logind[1437]: Removed session 16. Sep 4 17:31:11.037077 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 49324 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:11.038969 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:11.043838 systemd-logind[1437]: New session 17 of user core. Sep 4 17:31:11.049653 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:31:12.587133 sshd[4069]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:12.597621 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:49324.service: Deactivated successfully. Sep 4 17:31:12.599980 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:31:12.604152 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:31:12.611721 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:49334.service - OpenSSH per-connection server daemon (10.0.0.1:49334). Sep 4 17:31:12.612486 systemd-logind[1437]: Removed session 17. Sep 4 17:31:12.639398 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 49334 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:12.641117 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:12.645071 systemd-logind[1437]: New session 18 of user core. Sep 4 17:31:12.660536 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 4 17:31:12.897199 sshd[4089]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:12.904763 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:49334.service: Deactivated successfully. Sep 4 17:31:12.907000 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:31:12.908734 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:31:12.917632 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:49340.service - OpenSSH per-connection server daemon (10.0.0.1:49340). Sep 4 17:31:12.918598 systemd-logind[1437]: Removed session 18. Sep 4 17:31:12.944797 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:12.946389 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:12.950301 systemd-logind[1437]: New session 19 of user core. Sep 4 17:31:12.957500 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:31:13.063167 sshd[4102]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:13.067189 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:49340.service: Deactivated successfully. Sep 4 17:31:13.069077 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:31:13.069730 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:31:13.070758 systemd-logind[1437]: Removed session 19. Sep 4 17:31:18.074305 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:45620.service - OpenSSH per-connection server daemon (10.0.0.1:45620). Sep 4 17:31:18.105610 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 45620 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:18.107084 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:18.111619 systemd-logind[1437]: New session 20 of user core. Sep 4 17:31:18.116502 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:31:18.220749 sshd[4118]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:18.224336 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:45620.service: Deactivated successfully. Sep 4 17:31:18.226302 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:31:18.226923 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:31:18.227700 systemd-logind[1437]: Removed session 20. Sep 4 17:31:23.239571 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:45624.service - OpenSSH per-connection server daemon (10.0.0.1:45624). Sep 4 17:31:23.277037 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 45624 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:23.279072 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:23.284019 systemd-logind[1437]: New session 21 of user core. Sep 4 17:31:23.291646 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:31:23.424992 sshd[4136]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:23.430722 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:45624.service: Deactivated successfully. Sep 4 17:31:23.433869 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:31:23.434720 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:31:23.435943 systemd-logind[1437]: Removed session 21. Sep 4 17:31:28.437706 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:33862.service - OpenSSH per-connection server daemon (10.0.0.1:33862). 
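The sshd and systemd-logind entries above repeat a fixed cycle: a per-connection sshd@… service starts, a publickey login for core is accepted, logind opens session N, the session closes, the service deactivates, and the session is removed. A small sketch for extracting users, source ports and session numbers from such journal text when auditing these cycles; the patterns are written against the exact phrasing seen here:

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        accepted = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
        opened   = regexp.MustCompile(`New session (\d+) of user (\S+)`)
        removed  = regexp.MustCompile(`Removed session (\d+)`)
    )

    func main() {
        lines := []string{
            "sshd[4118]: Accepted publickey for core from 10.0.0.1 port 45620 ssh2: RSA SHA256:...",
            "systemd-logind[1437]: New session 20 of user core.",
            "systemd-logind[1437]: Removed session 20.",
        }
        for _, l := range lines {
            switch {
            case accepted.MatchString(l):
                m := accepted.FindStringSubmatch(l)
                fmt.Printf("login user=%s from=%s port=%s\n", m[1], m[2], m[3])
            case opened.MatchString(l):
                fmt.Println("session opened:", opened.FindStringSubmatch(l)[1])
            case removed.MatchString(l):
                fmt.Println("session closed:", removed.FindStringSubmatch(l)[1])
            }
        }
    }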
Sep 4 17:31:28.470971 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 33862 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:28.472629 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:28.477045 systemd-logind[1437]: New session 22 of user core. Sep 4 17:31:28.490533 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:31:28.596226 sshd[4150]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:28.600508 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:33862.service: Deactivated successfully. Sep 4 17:31:28.602601 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:31:28.603179 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:31:28.604016 systemd-logind[1437]: Removed session 22. Sep 4 17:31:33.608631 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:33866.service - OpenSSH per-connection server daemon (10.0.0.1:33866). Sep 4 17:31:33.640201 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 33866 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:33.641776 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:33.645447 systemd-logind[1437]: New session 23 of user core. Sep 4 17:31:33.653503 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:31:33.757647 sshd[4166]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:33.770260 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:33866.service: Deactivated successfully. Sep 4 17:31:33.772298 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:31:33.773941 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:31:33.775398 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:33870.service - OpenSSH per-connection server daemon (10.0.0.1:33870). Sep 4 17:31:33.776233 systemd-logind[1437]: Removed session 23. Sep 4 17:31:33.815380 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 33870 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:33.816790 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:33.820802 systemd-logind[1437]: New session 24 of user core. Sep 4 17:31:33.831486 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:31:35.167774 containerd[1454]: time="2024-09-04T17:31:35.167709088Z" level=info msg="StopContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" with timeout 30 (s)" Sep 4 17:31:35.184541 containerd[1454]: time="2024-09-04T17:31:35.184502468Z" level=info msg="Stop container \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" with signal terminated" Sep 4 17:31:35.203610 systemd[1]: cri-containerd-55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6.scope: Deactivated successfully. Sep 4 17:31:35.226180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6-rootfs.mount: Deactivated successfully. 
Sep 4 17:31:35.254820 containerd[1454]: time="2024-09-04T17:31:35.254754958Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:31:35.260038 containerd[1454]: time="2024-09-04T17:31:35.259999663Z" level=info msg="StopContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" with timeout 2 (s)" Sep 4 17:31:35.260239 containerd[1454]: time="2024-09-04T17:31:35.260217792Z" level=info msg="Stop container \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" with signal terminated" Sep 4 17:31:35.263476 containerd[1454]: time="2024-09-04T17:31:35.263363169Z" level=info msg="shim disconnected" id=55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6 namespace=k8s.io Sep 4 17:31:35.263476 containerd[1454]: time="2024-09-04T17:31:35.263469693Z" level=warning msg="cleaning up after shim disconnected" id=55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6 namespace=k8s.io Sep 4 17:31:35.263476 containerd[1454]: time="2024-09-04T17:31:35.263479351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:35.270117 systemd-networkd[1385]: lxc_health: Link DOWN Sep 4 17:31:35.270580 systemd-networkd[1385]: lxc_health: Lost carrier Sep 4 17:31:35.325417 systemd[1]: cri-containerd-ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b.scope: Deactivated successfully. Sep 4 17:31:35.325927 systemd[1]: cri-containerd-ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b.scope: Consumed 7.103s CPU time. Sep 4 17:31:35.329038 containerd[1454]: time="2024-09-04T17:31:35.328898896Z" level=info msg="StopContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" returns successfully" Sep 4 17:31:35.335222 containerd[1454]: time="2024-09-04T17:31:35.335162226Z" level=info msg="StopPodSandbox for \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\"" Sep 4 17:31:35.346944 containerd[1454]: time="2024-09-04T17:31:35.335234415Z" level=info msg="Container to stop \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.350227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b-rootfs.mount: Deactivated successfully. Sep 4 17:31:35.350446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354-shm.mount: Deactivated successfully. Sep 4 17:31:35.353706 systemd[1]: cri-containerd-d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354.scope: Deactivated successfully. Sep 4 17:31:35.373840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354-rootfs.mount: Deactivated successfully. 
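The error at the top of this block, failed to reload cni configuration after receiving fs change event(REMOVE "/etc/cni/net.d/05-cilium.conf"), is the runtime reacting to Cilium's CNI conflist being deleted as part of the teardown: it watches the CNI configuration directory and, once the last config file is gone, has nothing left to load, which is why the node later reports "cni plugin not initialized". A minimal sketch of that watch-and-reload pattern is below; it uses the fsnotify library with a placeholder reload function and is illustrative rather than containerd's actual implementation.

```go
// cniwatch: react to CNI config files appearing or disappearing in /etc/cni/net.d.
// Illustrative only; real runtimes debounce events and re-scan the directory.
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

const cniConfDir = "/etc/cni/net.d"

// reloadCNIConfig is a stand-in for whatever re-reads the conflist files.
func reloadCNIConfig(dir string) error {
	matches, err := filepath.Glob(filepath.Join(dir, "*.conf*"))
	if err != nil {
		return err
	}
	log.Printf("reloading CNI config, %d file(s) present", len(matches))
	return nil
}

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(cniConfDir); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// Create, Write, Remove and Rename all change what config is loadable.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove|fsnotify.Rename) != 0 {
				log.Printf("fs change event(%s %q)", ev.Op, ev.Name)
				if err := reloadCNIConfig(cniConfDir); err != nil {
					log.Printf("failed to reload cni configuration: %v", err)
				}
			}
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```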
Sep 4 17:31:35.438072 containerd[1454]: time="2024-09-04T17:31:35.437894230Z" level=info msg="shim disconnected" id=ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b namespace=k8s.io Sep 4 17:31:35.438072 containerd[1454]: time="2024-09-04T17:31:35.437970598Z" level=warning msg="cleaning up after shim disconnected" id=ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b namespace=k8s.io Sep 4 17:31:35.438072 containerd[1454]: time="2024-09-04T17:31:35.437982139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:35.438457 containerd[1454]: time="2024-09-04T17:31:35.438132388Z" level=info msg="shim disconnected" id=d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354 namespace=k8s.io Sep 4 17:31:35.438457 containerd[1454]: time="2024-09-04T17:31:35.438170130Z" level=warning msg="cleaning up after shim disconnected" id=d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354 namespace=k8s.io Sep 4 17:31:35.438457 containerd[1454]: time="2024-09-04T17:31:35.438181001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:35.457285 containerd[1454]: time="2024-09-04T17:31:35.456944132Z" level=info msg="TearDown network for sandbox \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\" successfully" Sep 4 17:31:35.457285 containerd[1454]: time="2024-09-04T17:31:35.457008706Z" level=info msg="StopPodSandbox for \"d13a00a2007153543a2adc16fde00319ce86fde62990e3386f822bef4e53d354\" returns successfully" Sep 4 17:31:35.463619 containerd[1454]: time="2024-09-04T17:31:35.463088443Z" level=info msg="StopContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" returns successfully" Sep 4 17:31:35.464276 containerd[1454]: time="2024-09-04T17:31:35.464245625Z" level=info msg="StopPodSandbox for \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\"" Sep 4 17:31:35.464355 containerd[1454]: time="2024-09-04T17:31:35.464296572Z" level=info msg="Container to stop \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.464355 containerd[1454]: time="2024-09-04T17:31:35.464345877Z" level=info msg="Container to stop \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.464597 containerd[1454]: time="2024-09-04T17:31:35.464359804Z" level=info msg="Container to stop \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.464597 containerd[1454]: time="2024-09-04T17:31:35.464405512Z" level=info msg="Container to stop \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.464597 containerd[1454]: time="2024-09-04T17:31:35.464417846Z" level=info msg="Container to stop \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:31:35.467256 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b-shm.mount: Deactivated successfully. Sep 4 17:31:35.475760 systemd[1]: cri-containerd-379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b.scope: Deactivated successfully. 
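Taken together, these entries are the CRI teardown sequence for the Cilium pods: StopContainer with a grace period (30 s for the operator container, 2 s for the agent) delivers SIGTERM and escalates to SIGKILL, and StopPodSandbox then stops the sandbox and tears down its network before the "returns successfully" lines appear. A rough sketch of issuing those two calls directly against containerd's CRI socket follows; it assumes the standard k8s.io/cri-api bindings and the default socket path, and uses the container and sandbox IDs from the log purely as examples.

```go
// cristop: stop a container and its pod sandbox over the CRI gRPC API.
// A sketch of the call sequence visible in the log, not kubelet's own code.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI plugin listens on this socket on most installs.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// IDs taken from the log above, shown here only as examples.
	containerID := "ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b"
	sandboxID := "379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b"

	// SIGTERM the container, escalate to SIGKILL after 2 seconds.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     2,
	}); err != nil {
		log.Fatalf("StopContainer: %v", err)
	}

	// Stopping the sandbox also triggers the CNI network teardown.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: sandboxID,
	}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}
	log.Println("container and sandbox stopped")
}
```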
Sep 4 17:31:35.481281 kubelet[2544]: I0904 17:31:35.481236 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63718805-1681-4def-93f1-2666f88d400b-cilium-config-path\") pod \"63718805-1681-4def-93f1-2666f88d400b\" (UID: \"63718805-1681-4def-93f1-2666f88d400b\") " Sep 4 17:31:35.481281 kubelet[2544]: I0904 17:31:35.481292 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwc4p\" (UniqueName: \"kubernetes.io/projected/63718805-1681-4def-93f1-2666f88d400b-kube-api-access-dwc4p\") pod \"63718805-1681-4def-93f1-2666f88d400b\" (UID: \"63718805-1681-4def-93f1-2666f88d400b\") " Sep 4 17:31:35.486098 kubelet[2544]: I0904 17:31:35.486056 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63718805-1681-4def-93f1-2666f88d400b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "63718805-1681-4def-93f1-2666f88d400b" (UID: "63718805-1681-4def-93f1-2666f88d400b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:31:35.490545 kubelet[2544]: I0904 17:31:35.490246 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63718805-1681-4def-93f1-2666f88d400b-kube-api-access-dwc4p" (OuterVolumeSpecName: "kube-api-access-dwc4p") pod "63718805-1681-4def-93f1-2666f88d400b" (UID: "63718805-1681-4def-93f1-2666f88d400b"). InnerVolumeSpecName "kube-api-access-dwc4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:35.582510 kubelet[2544]: I0904 17:31:35.582438 2544 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63718805-1681-4def-93f1-2666f88d400b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.582510 kubelet[2544]: I0904 17:31:35.582483 2544 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dwc4p\" (UniqueName: \"kubernetes.io/projected/63718805-1681-4def-93f1-2666f88d400b-kube-api-access-dwc4p\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.585210 containerd[1454]: time="2024-09-04T17:31:35.585130529Z" level=info msg="shim disconnected" id=379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b namespace=k8s.io Sep 4 17:31:35.585210 containerd[1454]: time="2024-09-04T17:31:35.585201836Z" level=warning msg="cleaning up after shim disconnected" id=379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b namespace=k8s.io Sep 4 17:31:35.585210 containerd[1454]: time="2024-09-04T17:31:35.585211464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:35.600135 containerd[1454]: time="2024-09-04T17:31:35.600083547Z" level=info msg="TearDown network for sandbox \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" successfully" Sep 4 17:31:35.600135 containerd[1454]: time="2024-09-04T17:31:35.600124525Z" level=info msg="StopPodSandbox for \"379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b\" returns successfully" Sep 4 17:31:35.683062 kubelet[2544]: I0904 17:31:35.682988 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrz9\" (UniqueName: \"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683062 kubelet[2544]: 
I0904 17:31:35.683043 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-run\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683062 kubelet[2544]: I0904 17:31:35.683066 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-etc-cni-netd\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683307 kubelet[2544]: I0904 17:31:35.683088 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-clustermesh-secrets\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683307 kubelet[2544]: I0904 17:31:35.683109 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-xtables-lock\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683307 kubelet[2544]: I0904 17:31:35.683128 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hostproc\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683307 kubelet[2544]: I0904 17:31:35.683137 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683307 kubelet[2544]: I0904 17:31:35.683168 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683148 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-net\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683208 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-lib-modules\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683228 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-kernel\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683247 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hubble-tls\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683266 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-config-path\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683504 kubelet[2544]: I0904 17:31:35.683286 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cni-path\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683302 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-bpf-maps\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683318 2544 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-cgroup\") pod \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\" (UID: \"3abd6f42-0e27-40ca-87ad-80c60d7b5b43\") " Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683357 2544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683405 2544 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683425 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683697 kubelet[2544]: I0904 17:31:35.683441 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683892 kubelet[2544]: I0904 17:31:35.683455 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hostproc" (OuterVolumeSpecName: "hostproc") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683892 kubelet[2544]: I0904 17:31:35.683467 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683892 kubelet[2544]: I0904 17:31:35.683480 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683892 kubelet[2544]: I0904 17:31:35.683707 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.683892 kubelet[2544]: I0904 17:31:35.683728 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cni-path" (OuterVolumeSpecName: "cni-path") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.684061 kubelet[2544]: I0904 17:31:35.683979 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:31:35.687016 kubelet[2544]: I0904 17:31:35.686979 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:31:35.687092 kubelet[2544]: I0904 17:31:35.687070 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:35.687144 kubelet[2544]: I0904 17:31:35.687124 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:31:35.687307 kubelet[2544]: I0904 17:31:35.687282 2544 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9" (OuterVolumeSpecName: "kube-api-access-gnrz9") pod "3abd6f42-0e27-40ca-87ad-80c60d7b5b43" (UID: "3abd6f42-0e27-40ca-87ad-80c60d7b5b43"). InnerVolumeSpecName "kube-api-access-gnrz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:31:35.784038 kubelet[2544]: I0904 17:31:35.783987 2544 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784038 kubelet[2544]: I0904 17:31:35.784037 2544 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784038 kubelet[2544]: I0904 17:31:35.784051 2544 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784038 kubelet[2544]: I0904 17:31:35.784063 2544 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784075 2544 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784086 2544 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784097 2544 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gnrz9\" (UniqueName: \"kubernetes.io/projected/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-kube-api-access-gnrz9\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784108 2544 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784118 2544 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784130 2544 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784140 2544 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.784326 kubelet[2544]: I0904 17:31:35.784150 2544 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3abd6f42-0e27-40ca-87ad-80c60d7b5b43-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:31:35.915897 kubelet[2544]: I0904 17:31:35.915811 2544 scope.go:117] "RemoveContainer" containerID="55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6" Sep 4 17:31:35.916770 containerd[1454]: time="2024-09-04T17:31:35.916730990Z" level=info 
msg="RemoveContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\"" Sep 4 17:31:35.922704 systemd[1]: Removed slice kubepods-besteffort-pod63718805_1681_4def_93f1_2666f88d400b.slice - libcontainer container kubepods-besteffort-pod63718805_1681_4def_93f1_2666f88d400b.slice. Sep 4 17:31:35.925759 systemd[1]: Removed slice kubepods-burstable-pod3abd6f42_0e27_40ca_87ad_80c60d7b5b43.slice - libcontainer container kubepods-burstable-pod3abd6f42_0e27_40ca_87ad_80c60d7b5b43.slice. Sep 4 17:31:35.925841 systemd[1]: kubepods-burstable-pod3abd6f42_0e27_40ca_87ad_80c60d7b5b43.slice: Consumed 7.207s CPU time. Sep 4 17:31:35.956019 containerd[1454]: time="2024-09-04T17:31:35.955964407Z" level=info msg="RemoveContainer for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" returns successfully" Sep 4 17:31:35.956296 kubelet[2544]: I0904 17:31:35.956270 2544 scope.go:117] "RemoveContainer" containerID="55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6" Sep 4 17:31:35.956665 containerd[1454]: time="2024-09-04T17:31:35.956611319Z" level=error msg="ContainerStatus for \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\": not found" Sep 4 17:31:35.966822 kubelet[2544]: E0904 17:31:35.965603 2544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\": not found" containerID="55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6" Sep 4 17:31:35.966822 kubelet[2544]: I0904 17:31:35.965646 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6"} err="failed to get container status \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"55d8a7b75a2fa48b6d10ab69dbc190eaece160ada9cfca17989a0d97341792a6\": not found" Sep 4 17:31:35.966822 kubelet[2544]: I0904 17:31:35.965764 2544 scope.go:117] "RemoveContainer" containerID="ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b" Sep 4 17:31:35.968515 containerd[1454]: time="2024-09-04T17:31:35.968480237Z" level=info msg="RemoveContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\"" Sep 4 17:31:35.971992 containerd[1454]: time="2024-09-04T17:31:35.971966959Z" level=info msg="RemoveContainer for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" returns successfully" Sep 4 17:31:35.972126 kubelet[2544]: I0904 17:31:35.972103 2544 scope.go:117] "RemoveContainer" containerID="12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520" Sep 4 17:31:35.972868 containerd[1454]: time="2024-09-04T17:31:35.972826479Z" level=info msg="RemoveContainer for \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\"" Sep 4 17:31:35.976170 containerd[1454]: time="2024-09-04T17:31:35.976142042Z" level=info msg="RemoveContainer for \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\" returns successfully" Sep 4 17:31:35.976330 kubelet[2544]: I0904 17:31:35.976261 2544 scope.go:117] "RemoveContainer" containerID="83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236" Sep 4 
17:31:35.977020 containerd[1454]: time="2024-09-04T17:31:35.976999799Z" level=info msg="RemoveContainer for \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\"" Sep 4 17:31:35.980347 containerd[1454]: time="2024-09-04T17:31:35.980316224Z" level=info msg="RemoveContainer for \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\" returns successfully" Sep 4 17:31:35.980473 kubelet[2544]: I0904 17:31:35.980448 2544 scope.go:117] "RemoveContainer" containerID="9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef" Sep 4 17:31:35.981243 containerd[1454]: time="2024-09-04T17:31:35.981212213Z" level=info msg="RemoveContainer for \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\"" Sep 4 17:31:35.984342 containerd[1454]: time="2024-09-04T17:31:35.984323775Z" level=info msg="RemoveContainer for \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\" returns successfully" Sep 4 17:31:35.984482 kubelet[2544]: I0904 17:31:35.984465 2544 scope.go:117] "RemoveContainer" containerID="4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44" Sep 4 17:31:35.985173 containerd[1454]: time="2024-09-04T17:31:35.985152435Z" level=info msg="RemoveContainer for \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\"" Sep 4 17:31:35.988467 containerd[1454]: time="2024-09-04T17:31:35.988443812Z" level=info msg="RemoveContainer for \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\" returns successfully" Sep 4 17:31:35.988575 kubelet[2544]: I0904 17:31:35.988559 2544 scope.go:117] "RemoveContainer" containerID="ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b" Sep 4 17:31:35.988731 containerd[1454]: time="2024-09-04T17:31:35.988691357Z" level=error msg="ContainerStatus for \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\": not found" Sep 4 17:31:35.988808 kubelet[2544]: E0904 17:31:35.988789 2544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\": not found" containerID="ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b" Sep 4 17:31:35.988841 kubelet[2544]: I0904 17:31:35.988817 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b"} err="failed to get container status \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ace44b200edc4616a651a1ea8cffb21e3f8f5c8227cfdd02ff4bdaca6598d49b\": not found" Sep 4 17:31:35.988841 kubelet[2544]: I0904 17:31:35.988836 2544 scope.go:117] "RemoveContainer" containerID="12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520" Sep 4 17:31:35.989006 containerd[1454]: time="2024-09-04T17:31:35.988975673Z" level=error msg="ContainerStatus for \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\": not found" Sep 4 17:31:35.989070 kubelet[2544]: E0904 17:31:35.989052 2544 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\": not found" containerID="12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520" Sep 4 17:31:35.989070 kubelet[2544]: I0904 17:31:35.989068 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520"} err="failed to get container status \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\": rpc error: code = NotFound desc = an error occurred when try to find container \"12fb89027939ed1fcc99471bf40063db831b1775d1ec2aeaa8fbaa5a29eac520\": not found" Sep 4 17:31:35.989174 kubelet[2544]: I0904 17:31:35.989079 2544 scope.go:117] "RemoveContainer" containerID="83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236" Sep 4 17:31:35.989241 containerd[1454]: time="2024-09-04T17:31:35.989213579Z" level=error msg="ContainerStatus for \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\": not found" Sep 4 17:31:35.989337 kubelet[2544]: E0904 17:31:35.989315 2544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\": not found" containerID="83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236" Sep 4 17:31:35.989411 kubelet[2544]: I0904 17:31:35.989343 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236"} err="failed to get container status \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\": rpc error: code = NotFound desc = an error occurred when try to find container \"83be9eea4a8632057d5350e7eed9ba9eee1c587993a17c42e13ed5f229b55236\": not found" Sep 4 17:31:35.989411 kubelet[2544]: I0904 17:31:35.989356 2544 scope.go:117] "RemoveContainer" containerID="9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef" Sep 4 17:31:35.989546 containerd[1454]: time="2024-09-04T17:31:35.989514266Z" level=error msg="ContainerStatus for \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\": not found" Sep 4 17:31:35.989649 kubelet[2544]: E0904 17:31:35.989624 2544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\": not found" containerID="9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef" Sep 4 17:31:35.989702 kubelet[2544]: I0904 17:31:35.989647 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef"} err="failed to get container status \"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9ffa3a4e30df6ec26360bd3011a5eac236fe0492c4bbd88668d098de53b7b7ef\": not found" Sep 4 17:31:35.989702 kubelet[2544]: I0904 17:31:35.989665 2544 scope.go:117] "RemoveContainer" containerID="4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44" Sep 4 17:31:35.990719 containerd[1454]: time="2024-09-04T17:31:35.990694091Z" level=error msg="ContainerStatus for \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\": not found" Sep 4 17:31:35.990821 kubelet[2544]: E0904 17:31:35.990798 2544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\": not found" containerID="4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44" Sep 4 17:31:35.990850 kubelet[2544]: I0904 17:31:35.990820 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44"} err="failed to get container status \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a51b7a963ea7202ad76b33eda12a9a4cf66a250f6a69f5673faccc5b93f4b44\": not found" Sep 4 17:31:36.201607 systemd[1]: var-lib-kubelet-pods-63718805\x2d1681\x2d4def\x2d93f1\x2d2666f88d400b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwc4p.mount: Deactivated successfully. Sep 4 17:31:36.201726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379aa5d36df4dc71e33ec88f5244bba117b292e5c71c00b7b2f3c8acb39a323b-rootfs.mount: Deactivated successfully. Sep 4 17:31:36.201817 systemd[1]: var-lib-kubelet-pods-3abd6f42\x2d0e27\x2d40ca\x2d87ad\x2d80c60d7b5b43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgnrz9.mount: Deactivated successfully. Sep 4 17:31:36.201913 systemd[1]: var-lib-kubelet-pods-3abd6f42\x2d0e27\x2d40ca\x2d87ad\x2d80c60d7b5b43-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:31:36.201997 systemd[1]: var-lib-kubelet-pods-3abd6f42\x2d0e27\x2d40ca\x2d87ad\x2d80c60d7b5b43-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:31:37.135659 sshd[4180]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:37.144365 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:33870.service: Deactivated successfully. Sep 4 17:31:37.146307 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:31:37.147884 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:31:37.149229 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:35256.service - OpenSSH per-connection server daemon (10.0.0.1:35256). Sep 4 17:31:37.150057 systemd-logind[1437]: Removed session 24. 
Sep 4 17:31:37.153239 kubelet[2544]: I0904 17:31:37.153209 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" path="/var/lib/kubelet/pods/3abd6f42-0e27-40ca-87ad-80c60d7b5b43/volumes" Sep 4 17:31:37.154277 kubelet[2544]: I0904 17:31:37.154256 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63718805-1681-4def-93f1-2666f88d400b" path="/var/lib/kubelet/pods/63718805-1681-4def-93f1-2666f88d400b/volumes" Sep 4 17:31:37.180897 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 35256 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:37.182651 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:37.186864 systemd-logind[1437]: New session 25 of user core. Sep 4 17:31:37.195559 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:31:37.196361 kubelet[2544]: E0904 17:31:37.196318 2544 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:31:37.654646 sshd[4343]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:37.665191 kubelet[2544]: I0904 17:31:37.665156 2544 topology_manager.go:215] "Topology Admit Handler" podUID="d8c73939-bdbc-4a02-84e3-e100fc1f439d" podNamespace="kube-system" podName="cilium-2wl78" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665881 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63718805-1681-4def-93f1-2666f88d400b" containerName="cilium-operator" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665900 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="clean-cilium-state" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665907 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="cilium-agent" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665914 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="apply-sysctl-overwrites" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665920 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="mount-bpf-fs" Sep 4 17:31:37.665962 kubelet[2544]: E0904 17:31:37.665926 2544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="mount-cgroup" Sep 4 17:31:37.665962 kubelet[2544]: I0904 17:31:37.665952 2544 memory_manager.go:354] "RemoveStaleState removing state" podUID="3abd6f42-0e27-40ca-87ad-80c60d7b5b43" containerName="cilium-agent" Sep 4 17:31:37.665962 kubelet[2544]: I0904 17:31:37.665958 2544 memory_manager.go:354] "RemoveStaleState removing state" podUID="63718805-1681-4def-93f1-2666f88d400b" containerName="cilium-operator" Sep 4 17:31:37.670628 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:35256.service: Deactivated successfully. Sep 4 17:31:37.672727 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:31:37.673825 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:31:37.681611 systemd-logind[1437]: Removed session 25. Sep 4 17:31:37.688783 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272). 
Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694440 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-lib-modules\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694471 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-xtables-lock\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694487 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmlx\" (UniqueName: \"kubernetes.io/projected/d8c73939-bdbc-4a02-84e3-e100fc1f439d-kube-api-access-ldmlx\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694502 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-cilium-run\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694515 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-hostproc\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695393 kubelet[2544]: I0904 17:31:37.694528 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-etc-cni-netd\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695227 systemd[1]: Created slice kubepods-burstable-podd8c73939_bdbc_4a02_84e3_e100fc1f439d.slice - libcontainer container kubepods-burstable-podd8c73939_bdbc_4a02_84e3_e100fc1f439d.slice. 
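The VerifyControllerAttachedVolume entries enumerate everything the replacement cilium-2wl78 pod mounts: hostPath volumes (cilium-run, bpf-maps, hostproc, cni-path, etc-cni-netd, lib-modules, xtables-lock, cilium-cgroup, host-proc-sys-net, host-proc-sys-kernel), secrets (clustermesh-secrets, cilium-ipsec-secrets), the cilium-config-path configMap, and projected volumes for hubble-tls and the service-account token. As a rough illustration of how those plugin types are declared in a pod spec, a short sketch follows; the host paths and object names in it are assumptions based on a stock Cilium install, not values taken from this log.

```go
// volumes: sketch of how a few of the volume types named in the log are declared.
// Names and paths follow a typical Cilium DaemonSet; adjust for a real cluster.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ciliumVolumes() []corev1.Volume {
	hostPath := func(name, path string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path},
			},
		}
	}
	return []corev1.Volume{
		// "kubernetes.io/host-path" volumes from the log (assumed paths).
		hostPath("cilium-run", "/var/run/cilium"),
		hostPath("bpf-maps", "/sys/fs/bpf"),
		hostPath("etc-cni-netd", "/etc/cni/net.d"),
		hostPath("lib-modules", "/lib/modules"),
		// "kubernetes.io/secret" (assumed Secret name).
		{
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
			},
		},
		// "kubernetes.io/configmap" (assumed ConfigMap name).
		{
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
				},
			},
		},
	}
}

func main() {
	for _, v := range ciliumVolumes() {
		fmt.Println(v.Name)
	}
}
```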
Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694543 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-cni-path\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694559 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c73939-bdbc-4a02-84e3-e100fc1f439d-hubble-tls\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694587 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-bpf-maps\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694615 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c73939-bdbc-4a02-84e3-e100fc1f439d-cilium-config-path\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694633 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-host-proc-sys-net\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695675 kubelet[2544]: I0904 17:31:37.694654 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8c73939-bdbc-4a02-84e3-e100fc1f439d-cilium-ipsec-secrets\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695800 kubelet[2544]: I0904 17:31:37.694678 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c73939-bdbc-4a02-84e3-e100fc1f439d-clustermesh-secrets\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695800 kubelet[2544]: I0904 17:31:37.694696 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-cilium-cgroup\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.695800 kubelet[2544]: I0904 17:31:37.694718 2544 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c73939-bdbc-4a02-84e3-e100fc1f439d-host-proc-sys-kernel\") pod \"cilium-2wl78\" (UID: \"d8c73939-bdbc-4a02-84e3-e100fc1f439d\") " pod="kube-system/cilium-2wl78" Sep 4 17:31:37.719522 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:37.721296 
sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:37.725442 systemd-logind[1437]: New session 26 of user core. Sep 4 17:31:37.737631 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:31:37.789706 sshd[4356]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:37.813813 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:35272.service: Deactivated successfully. Sep 4 17:31:37.816173 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:31:37.817891 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:31:37.826713 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:35284.service - OpenSSH per-connection server daemon (10.0.0.1:35284). Sep 4 17:31:37.827759 systemd-logind[1437]: Removed session 26. Sep 4 17:31:37.855747 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 35284 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:31:37.857438 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:37.862119 systemd-logind[1437]: New session 27 of user core. Sep 4 17:31:37.877565 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:31:37.998459 kubelet[2544]: E0904 17:31:37.998424 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:37.999153 containerd[1454]: time="2024-09-04T17:31:37.998949918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wl78,Uid:d8c73939-bdbc-4a02-84e3-e100fc1f439d,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:38.021196 containerd[1454]: time="2024-09-04T17:31:38.021105149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:38.021196 containerd[1454]: time="2024-09-04T17:31:38.021169101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.021196 containerd[1454]: time="2024-09-04T17:31:38.021186785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:38.021405 containerd[1454]: time="2024-09-04T17:31:38.021203628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.047542 systemd[1]: Started cri-containerd-eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc.scope - libcontainer container eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc. 
Sep 4 17:31:38.070107 containerd[1454]: time="2024-09-04T17:31:38.070065173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wl78,Uid:d8c73939-bdbc-4a02-84e3-e100fc1f439d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\"" Sep 4 17:31:38.070816 kubelet[2544]: E0904 17:31:38.070795 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:38.072805 containerd[1454]: time="2024-09-04T17:31:38.072742553Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:31:38.087719 containerd[1454]: time="2024-09-04T17:31:38.087665445Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b\"" Sep 4 17:31:38.088095 containerd[1454]: time="2024-09-04T17:31:38.088061453Z" level=info msg="StartContainer for \"82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b\"" Sep 4 17:31:38.115517 systemd[1]: Started cri-containerd-82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b.scope - libcontainer container 82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b. Sep 4 17:31:38.141395 containerd[1454]: time="2024-09-04T17:31:38.141320524Z" level=info msg="StartContainer for \"82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b\" returns successfully" Sep 4 17:31:38.152450 systemd[1]: cri-containerd-82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b.scope: Deactivated successfully. 
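RunPodSandbox returns the sandbox ID, and each of the pod's containers (starting with the mount-cgroup init container here) is then created inside that sandbox and started; the cri-containerd-<id>.scope units in the systemd messages are the cgroup scopes those tasks run in. A sketch of the same create-then-start sequence over the CRI API is below, reusing the sandbox ID from the log; the sandbox and container configs are pared down to a minimum, and the image and command are placeholders rather than the actual Cilium spec.

```go
// cricreate: create and start one container inside an existing pod sandbox.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox ID returned by RunPodSandbox in the log above.
	sandboxID := "eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc"

	// Minimal sandbox/container configs; a real kubelet fills in mounts,
	// environment, security context, resources and so on.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-2wl78",
			Namespace: "kube-system",
			Uid:       "d8c73939-bdbc-4a02-84e3-e100fc1f439d",
		},
	}
	containerCfg := &runtimeapi.ContainerConfig{
		Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
		Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.15"}, // assumed image tag
		Command:  []string{"sh", "-c", "echo placeholder"},                    // placeholder command
	}

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sandboxID,
		Config:        containerCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started container %s", created.ContainerId)
}
```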
Sep 4 17:31:38.187336 containerd[1454]: time="2024-09-04T17:31:38.187266835Z" level=info msg="shim disconnected" id=82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b namespace=k8s.io Sep 4 17:31:38.187336 containerd[1454]: time="2024-09-04T17:31:38.187323113Z" level=warning msg="cleaning up after shim disconnected" id=82c77d9dbfcaa189e16fcba4c07b2a930f6e39107a70d9eca86b6cc2b55bb93b namespace=k8s.io Sep 4 17:31:38.187336 containerd[1454]: time="2024-09-04T17:31:38.187331528Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:38.754248 kubelet[2544]: I0904 17:31:38.754201 2544 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:31:38Z","lastTransitionTime":"2024-09-04T17:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:31:38.928196 kubelet[2544]: E0904 17:31:38.928169 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:38.931866 containerd[1454]: time="2024-09-04T17:31:38.931819696Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:31:38.947668 containerd[1454]: time="2024-09-04T17:31:38.947621250Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d\"" Sep 4 17:31:38.948098 containerd[1454]: time="2024-09-04T17:31:38.948075741Z" level=info msg="StartContainer for \"bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d\"" Sep 4 17:31:38.974566 systemd[1]: Started cri-containerd-bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d.scope - libcontainer container bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d. Sep 4 17:31:38.999516 containerd[1454]: time="2024-09-04T17:31:38.999472184Z" level=info msg="StartContainer for \"bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d\" returns successfully" Sep 4 17:31:39.004775 systemd[1]: cri-containerd-bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d.scope: Deactivated successfully. Sep 4 17:31:39.027535 containerd[1454]: time="2024-09-04T17:31:39.027469479Z" level=info msg="shim disconnected" id=bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d namespace=k8s.io Sep 4 17:31:39.027535 containerd[1454]: time="2024-09-04T17:31:39.027531808Z" level=warning msg="cleaning up after shim disconnected" id=bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d namespace=k8s.io Sep 4 17:31:39.027535 containerd[1454]: time="2024-09-04T17:31:39.027541836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:39.804293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf2ab6775c99b86bab9d180f0aea13b1816748c95886b72756518ae7cef9f3d-rootfs.mount: Deactivated successfully. 
Sep 4 17:31:39.931577 kubelet[2544]: E0904 17:31:39.931550 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:39.933064 containerd[1454]: time="2024-09-04T17:31:39.933028391Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 17:31:39.954159 containerd[1454]: time="2024-09-04T17:31:39.954107126Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7\""
Sep 4 17:31:39.954708 containerd[1454]: time="2024-09-04T17:31:39.954652740Z" level=info msg="StartContainer for \"5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7\""
Sep 4 17:31:39.986573 systemd[1]: Started cri-containerd-5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7.scope - libcontainer container 5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7.
Sep 4 17:31:40.019751 containerd[1454]: time="2024-09-04T17:31:40.019711435Z" level=info msg="StartContainer for \"5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7\" returns successfully"
Sep 4 17:31:40.022567 systemd[1]: cri-containerd-5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7.scope: Deactivated successfully.
Sep 4 17:31:40.048505 containerd[1454]: time="2024-09-04T17:31:40.048430630Z" level=info msg="shim disconnected" id=5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7 namespace=k8s.io
Sep 4 17:31:40.048505 containerd[1454]: time="2024-09-04T17:31:40.048494313Z" level=warning msg="cleaning up after shim disconnected" id=5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7 namespace=k8s.io
Sep 4 17:31:40.048505 containerd[1454]: time="2024-09-04T17:31:40.048505414Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:31:40.804146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e6a07c5fd44c50cf2338cef33818ac800e03fc152b448b964945e555207e0d7-rootfs.mount: Deactivated successfully.
Sep 4 17:31:40.934984 kubelet[2544]: E0904 17:31:40.934937 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:40.937886 containerd[1454]: time="2024-09-04T17:31:40.937825000Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 17:31:40.952762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272113165.mount: Deactivated successfully.
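The mount-bpf-fs init container created above exists to make sure a BPF filesystem is mounted (conventionally at /sys/fs/bpf) so the agent can pin its eBPF maps across restarts. A small host-side check, sketched in Python and assuming only the standard /proc/mounts format:

```python
# Sketch: check whether a BPF filesystem is mounted on the host, which is what the
# mount-bpf-fs init container in the log above ensures (typically at /sys/fs/bpf).
def bpffs_mounts(proc_mounts: str = "/proc/mounts") -> list:
    """Return the mount points of all filesystems of type 'bpf'."""
    mounts = []
    with open(proc_mounts) as f:
        for line in f:
            fields = line.split()
            # /proc/mounts format: device mountpoint fstype options dump pass
            if len(fields) >= 3 and fields[2] == "bpf":
                mounts.append(fields[1])
    return mounts

if __name__ == "__main__":
    found = bpffs_mounts()
    print("bpffs mounted at:", found if found else "nowhere (mount-bpf-fs would mount it)")
```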
Sep 4 17:31:40.966598 containerd[1454]: time="2024-09-04T17:31:40.966538475Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613\""
Sep 4 17:31:40.967994 containerd[1454]: time="2024-09-04T17:31:40.967083388Z" level=info msg="StartContainer for \"5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613\""
Sep 4 17:31:41.000660 systemd[1]: Started cri-containerd-5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613.scope - libcontainer container 5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613.
Sep 4 17:31:41.026568 systemd[1]: cri-containerd-5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613.scope: Deactivated successfully.
Sep 4 17:31:41.029956 containerd[1454]: time="2024-09-04T17:31:41.029911730Z" level=info msg="StartContainer for \"5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613\" returns successfully"
Sep 4 17:31:41.057544 containerd[1454]: time="2024-09-04T17:31:41.057356161Z" level=info msg="shim disconnected" id=5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613 namespace=k8s.io
Sep 4 17:31:41.057544 containerd[1454]: time="2024-09-04T17:31:41.057442405Z" level=warning msg="cleaning up after shim disconnected" id=5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613 namespace=k8s.io
Sep 4 17:31:41.057544 containerd[1454]: time="2024-09-04T17:31:41.057462424Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:31:41.803925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5282909ca9eff4ec3b9597bb2b504bad567377fb1c586bf4f398b84a2c6c0613-rootfs.mount: Deactivated successfully.
Sep 4 17:31:41.938702 kubelet[2544]: E0904 17:31:41.938654 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:41.941269 containerd[1454]: time="2024-09-04T17:31:41.941229013Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 17:31:41.960313 containerd[1454]: time="2024-09-04T17:31:41.960264207Z" level=info msg="CreateContainer within sandbox \"eff1ac81b9b5cf45102456ff872a3b6b6221104b8fc03b5b6067d0ac47d0b2fc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e694711f4934d0332b2f1989f63b3610469f7e87115bc8b0a618681db5bd4cb\""
Sep 4 17:31:41.960826 containerd[1454]: time="2024-09-04T17:31:41.960801906Z" level=info msg="StartContainer for \"7e694711f4934d0332b2f1989f63b3610469f7e87115bc8b0a618681db5bd4cb\""
Sep 4 17:31:41.993586 systemd[1]: Started cri-containerd-7e694711f4934d0332b2f1989f63b3610469f7e87115bc8b0a618681db5bd4cb.scope - libcontainer container 7e694711f4934d0332b2f1989f63b3610469f7e87115bc8b0a618681db5bd4cb.
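Between 17:31:38 and 17:31:41 containerd runs the Cilium pod's init containers in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before launching the long-running cilium-agent container. A sketch for recovering that ordering from containerd log lines like the ones above; the only assumption is the ContainerMetadata{Name:...} formatting visible in this log:

```python
# Sketch: recover the container creation order from containerd log lines such as
# the ones above. Only the ContainerMetadata{Name:...} pattern shown here is assumed.
import re

CREATE_RE = re.compile(r'CreateContainer within sandbox .* for container &ContainerMetadata\{Name:([^,]+),')

def container_sequence(log_lines):
    """Return container names in the order their CreateContainer requests appear."""
    names = []
    for line in log_lines:
        m = CREATE_RE.search(line)
        if m:
            names.append(m.group(1))
    return names

# Sandbox id truncated for brevity in this sample.
sample = [
    'CreateContainer within sandbox "eff1ac81..." for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}',
    'CreateContainer within sandbox "eff1ac81..." for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}',
]
print(container_sequence(sample))  # ['mount-cgroup', 'cilium-agent']
```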
Sep 4 17:31:42.026356 containerd[1454]: time="2024-09-04T17:31:42.026289538Z" level=info msg="StartContainer for \"7e694711f4934d0332b2f1989f63b3610469f7e87115bc8b0a618681db5bd4cb\" returns successfully"
Sep 4 17:31:42.151140 kubelet[2544]: E0904 17:31:42.151007 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:42.449404 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 17:31:42.946329 kubelet[2544]: E0904 17:31:42.946298 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:42.957357 kubelet[2544]: I0904 17:31:42.957290 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wl78" podStartSLOduration=5.957272346 podStartE2EDuration="5.957272346s" podCreationTimestamp="2024-09-04 17:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:42.956548702 +0000 UTC m=+85.890839790" watchObservedRunningTime="2024-09-04 17:31:42.957272346 +0000 UTC m=+85.891563394"
Sep 4 17:31:43.999635 kubelet[2544]: E0904 17:31:43.999592 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:45.798110 systemd-networkd[1385]: lxc_health: Link UP
Sep 4 17:31:45.808350 systemd-networkd[1385]: lxc_health: Gained carrier
Sep 4 17:31:46.000789 kubelet[2544]: E0904 17:31:46.000743 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:46.152103 kubelet[2544]: E0904 17:31:46.150920 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:46.953110 kubelet[2544]: E0904 17:31:46.952904 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:47.528730 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Sep 4 17:31:47.954128 kubelet[2544]: E0904 17:31:47.953992 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:49.151657 kubelet[2544]: E0904 17:31:49.151609 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:52.715588 sshd[4368]: pam_unix(sshd:session): session closed for user core
Sep 4 17:31:52.719110 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:35284.service: Deactivated successfully.
Sep 4 17:31:52.720942 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:31:52.721592 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:31:52.722405 systemd-logind[1437]: Removed session 27.
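The pod_startup_latency_tracker entry above reports podStartSLOduration=5.957272346s for cilium-2wl78 with zero-valued image-pull timestamps, which is consistent with the duration simply being the gap between the pod's creation timestamp (17:31:37) and the time kubelet observed it running (about 17:31:42.957). A quick check of that arithmetic using the timestamps from the log:

```python
# Sketch: sanity-check the podStartSLOduration reported above from the two timestamps
# in the log entry (podCreationTimestamp 17:31:37, watchObservedRunningTime 17:31:42.957272346 UTC).
from datetime import datetime, timezone

created = datetime(2024, 9, 4, 17, 31, 37, tzinfo=timezone.utc)
observed = datetime(2024, 9, 4, 17, 31, 42, 957272, tzinfo=timezone.utc)  # truncated to microseconds

print(f"{(observed - created).total_seconds():.6f}s")  # 5.957272s, matching the reported ~5.957272346s
```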