Jun 25 20:51:34.038274 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 20:51:34.038323 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 20:51:34.038337 kernel: BIOS-provided physical RAM map: Jun 25 20:51:34.038352 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 20:51:34.038373 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 20:51:34.038382 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 20:51:34.038393 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jun 25 20:51:34.038403 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jun 25 20:51:34.038412 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jun 25 20:51:34.038434 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jun 25 20:51:34.038444 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 20:51:34.038453 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 20:51:34.038479 kernel: NX (Execute Disable) protection: active Jun 25 20:51:34.038502 kernel: APIC: Static calls initialized Jun 25 20:51:34.038514 kernel: SMBIOS 2.8 present. Jun 25 20:51:34.038525 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jun 25 20:51:34.038536 kernel: Hypervisor detected: KVM Jun 25 20:51:34.038551 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 20:51:34.038562 kernel: kvm-clock: using sched offset of 4364571304 cycles Jun 25 20:51:34.038573 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 20:51:34.038585 kernel: tsc: Detected 2499.998 MHz processor Jun 25 20:51:34.038596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 20:51:34.038607 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 20:51:34.038618 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jun 25 20:51:34.038629 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 20:51:34.038639 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 20:51:34.038655 kernel: Using GB pages for direct mapping Jun 25 20:51:34.038666 kernel: ACPI: Early table checksum verification disabled Jun 25 20:51:34.038676 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jun 25 20:51:34.038687 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038698 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038709 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038720 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jun 25 20:51:34.038731 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038742 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jun 25 20:51:34.038757 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038768 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 20:51:34.038778 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jun 25 20:51:34.038789 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jun 25 20:51:34.038800 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jun 25 20:51:34.038864 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jun 25 20:51:34.038879 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jun 25 20:51:34.038896 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jun 25 20:51:34.038907 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jun 25 20:51:34.038919 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 20:51:34.038930 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 20:51:34.038942 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jun 25 20:51:34.038953 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jun 25 20:51:34.038964 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jun 25 20:51:34.038976 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jun 25 20:51:34.038991 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jun 25 20:51:34.039003 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jun 25 20:51:34.039014 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jun 25 20:51:34.039025 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jun 25 20:51:34.039037 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jun 25 20:51:34.039048 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jun 25 20:51:34.039059 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jun 25 20:51:34.039070 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jun 25 20:51:34.039081 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jun 25 20:51:34.039097 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jun 25 20:51:34.039108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 20:51:34.039120 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 20:51:34.039131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jun 25 20:51:34.039143 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jun 25 20:51:34.039154 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jun 25 20:51:34.039166 kernel: Zone ranges: Jun 25 20:51:34.039178 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 20:51:34.039189 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jun 25 20:51:34.039205 kernel: Normal empty Jun 25 20:51:34.039216 kernel: Movable zone start for each node Jun 25 20:51:34.039228 kernel: Early memory node ranges Jun 25 20:51:34.039239 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 20:51:34.039251 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jun 25 20:51:34.039262 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jun 25 20:51:34.039273 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 20:51:34.039285 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 20:51:34.039296 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jun 25 20:51:34.039307 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 20:51:34.039323 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 20:51:34.039335 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jun 25 20:51:34.039346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 20:51:34.039358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 20:51:34.039369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 20:51:34.039381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 20:51:34.039392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 20:51:34.039404 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 20:51:34.039415 kernel: TSC deadline timer available Jun 25 20:51:34.039431 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jun 25 20:51:34.039442 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 20:51:34.039465 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jun 25 20:51:34.039479 kernel: Booting paravirtualized kernel on KVM Jun 25 20:51:34.039491 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 20:51:34.039503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jun 25 20:51:34.039514 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Jun 25 20:51:34.039526 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Jun 25 20:51:34.039537 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jun 25 20:51:34.039554 kernel: kvm-guest: PV spinlocks enabled Jun 25 20:51:34.039566 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 20:51:34.039579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 20:51:34.039591 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 20:51:34.039602 kernel: random: crng init done Jun 25 20:51:34.039614 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 20:51:34.039625 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 20:51:34.039637 kernel: Fallback order for Node 0: 0 Jun 25 20:51:34.039653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jun 25 20:51:34.039664 kernel: Policy zone: DMA32 Jun 25 20:51:34.039676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 20:51:34.039687 kernel: software IO TLB: area num 16. Jun 25 20:51:34.039699 kernel: Memory: 1895384K/2096616K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 200972K reserved, 0K cma-reserved) Jun 25 20:51:34.039711 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jun 25 20:51:34.039722 kernel: Kernel/User page tables isolation: enabled Jun 25 20:51:34.039734 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 20:51:34.039745 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 20:51:34.039761 kernel: Dynamic Preempt: voluntary Jun 25 20:51:34.039773 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 20:51:34.039785 kernel: rcu: RCU event tracing is enabled. 
Jun 25 20:51:34.039809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jun 25 20:51:34.039836 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 20:51:34.039861 kernel: Rude variant of Tasks RCU enabled. Jun 25 20:51:34.039877 kernel: Tracing variant of Tasks RCU enabled. Jun 25 20:51:34.039890 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 20:51:34.039902 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jun 25 20:51:34.039914 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jun 25 20:51:34.039926 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 20:51:34.039938 kernel: Console: colour VGA+ 80x25 Jun 25 20:51:34.039954 kernel: printk: console [tty0] enabled Jun 25 20:51:34.039967 kernel: printk: console [ttyS0] enabled Jun 25 20:51:34.039979 kernel: ACPI: Core revision 20230628 Jun 25 20:51:34.039991 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 20:51:34.040003 kernel: x2apic enabled Jun 25 20:51:34.040019 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 20:51:34.040032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 20:51:34.040044 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jun 25 20:51:34.040056 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 20:51:34.040068 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 20:51:34.040080 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 20:51:34.040092 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 20:51:34.040104 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 20:51:34.040116 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 20:51:34.040132 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 20:51:34.040144 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 25 20:51:34.040156 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 20:51:34.040168 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 20:51:34.040180 kernel: MDS: Mitigation: Clear CPU buffers Jun 25 20:51:34.040192 kernel: MMIO Stale Data: Unknown: No mitigations Jun 25 20:51:34.040204 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 25 20:51:34.040216 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 20:51:34.040228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 20:51:34.040240 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 20:51:34.040252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 20:51:34.040268 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 25 20:51:34.040280 kernel: Freeing SMP alternatives memory: 32K Jun 25 20:51:34.040292 kernel: pid_max: default: 32768 minimum: 301 Jun 25 20:51:34.040304 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 20:51:34.040316 kernel: SELinux: Initializing. 
Jun 25 20:51:34.040328 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 20:51:34.040340 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 20:51:34.040352 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jun 25 20:51:34.040364 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Jun 25 20:51:34.040376 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Jun 25 20:51:34.040388 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Jun 25 20:51:34.040405 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jun 25 20:51:34.040417 kernel: signal: max sigframe size: 1776 Jun 25 20:51:34.040429 kernel: rcu: Hierarchical SRCU implementation. Jun 25 20:51:34.040441 kernel: rcu: Max phase no-delay instances is 400. Jun 25 20:51:34.040464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 20:51:34.040478 kernel: smp: Bringing up secondary CPUs ... Jun 25 20:51:34.040490 kernel: smpboot: x86: Booting SMP configuration: Jun 25 20:51:34.040502 kernel: .... node #0, CPUs: #1 Jun 25 20:51:34.040514 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jun 25 20:51:34.040532 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 20:51:34.040544 kernel: smpboot: Max logical packages: 16 Jun 25 20:51:34.040556 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jun 25 20:51:34.040569 kernel: devtmpfs: initialized Jun 25 20:51:34.040580 kernel: x86/mm: Memory block size: 128MB Jun 25 20:51:34.040593 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 20:51:34.040605 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jun 25 20:51:34.040617 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 20:51:34.040629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 20:51:34.040646 kernel: audit: initializing netlink subsys (disabled) Jun 25 20:51:34.040658 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 20:51:34.040670 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 20:51:34.040682 kernel: audit: type=2000 audit(1719348692.524:1): state=initialized audit_enabled=0 res=1 Jun 25 20:51:34.040693 kernel: cpuidle: using governor menu Jun 25 20:51:34.040705 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 20:51:34.040717 kernel: dca service started, version 1.12.1 Jun 25 20:51:34.040730 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jun 25 20:51:34.040742 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jun 25 20:51:34.040770 kernel: PCI: Using configuration type 1 for base access Jun 25 20:51:34.040781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 20:51:34.040792 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 20:51:34.040803 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 20:51:34.040843 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 20:51:34.040856 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 20:51:34.040867 kernel: ACPI: Added _OSI(Module Device) Jun 25 20:51:34.040878 kernel: ACPI: Added _OSI(Processor Device) Jun 25 20:51:34.040901 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 20:51:34.040923 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 20:51:34.040934 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 20:51:34.040945 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 20:51:34.040970 kernel: ACPI: Interpreter enabled Jun 25 20:51:34.040984 kernel: ACPI: PM: (supports S0 S5) Jun 25 20:51:34.040996 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 20:51:34.041008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 20:51:34.041020 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 20:51:34.041032 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jun 25 20:51:34.041049 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 20:51:34.041300 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 20:51:34.041488 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 20:51:34.041649 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 25 20:51:34.041669 kernel: PCI host bridge to bus 0000:00 Jun 25 20:51:34.041883 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 20:51:34.042038 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 20:51:34.042203 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 20:51:34.042346 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jun 25 20:51:34.042506 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 25 20:51:34.042654 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jun 25 20:51:34.042831 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 20:51:34.043038 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jun 25 20:51:34.043231 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jun 25 20:51:34.043394 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jun 25 20:51:34.043581 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jun 25 20:51:34.043754 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jun 25 20:51:34.043962 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 20:51:34.044149 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.044327 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jun 25 20:51:34.044546 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.044714 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jun 25 20:51:34.044946 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.045103 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jun 25 20:51:34.045280 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jun 25 
20:51:34.045439 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jun 25 20:51:34.045648 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.045883 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jun 25 20:51:34.046067 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.046236 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jun 25 20:51:34.046402 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.046576 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jun 25 20:51:34.046765 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jun 25 20:51:34.046964 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jun 25 20:51:34.047139 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 20:51:34.047307 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jun 25 20:51:34.047519 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jun 25 20:51:34.047685 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jun 25 20:51:34.047881 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jun 25 20:51:34.048073 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 20:51:34.048240 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 20:51:34.048396 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jun 25 20:51:34.048567 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jun 25 20:51:34.048734 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jun 25 20:51:34.048952 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jun 25 20:51:34.049136 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jun 25 20:51:34.049322 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jun 25 20:51:34.049496 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jun 25 20:51:34.049667 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jun 25 20:51:34.049906 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jun 25 20:51:34.050120 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jun 25 20:51:34.050285 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jun 25 20:51:34.050470 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jun 25 20:51:34.050633 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jun 25 20:51:34.050791 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 25 20:51:34.051007 kernel: pci_bus 0000:02: extended config space not accessible Jun 25 20:51:34.051185 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jun 25 20:51:34.051357 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jun 25 20:51:34.051563 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jun 25 20:51:34.051727 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 25 20:51:34.051948 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jun 25 20:51:34.052107 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jun 25 20:51:34.052276 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jun 25 20:51:34.052433 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jun 25 20:51:34.052606 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 25 20:51:34.052791 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jun 25 
20:51:34.053022 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jun 25 20:51:34.053193 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jun 25 20:51:34.053374 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jun 25 20:51:34.053549 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 25 20:51:34.053710 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jun 25 20:51:34.053924 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jun 25 20:51:34.054084 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 25 20:51:34.054240 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jun 25 20:51:34.054419 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jun 25 20:51:34.054611 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 25 20:51:34.054769 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jun 25 20:51:34.054943 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jun 25 20:51:34.055120 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 25 20:51:34.055311 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jun 25 20:51:34.055491 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jun 25 20:51:34.055660 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 25 20:51:34.055873 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jun 25 20:51:34.056051 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jun 25 20:51:34.056202 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 25 20:51:34.056220 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 20:51:34.056233 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 20:51:34.056245 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 20:51:34.056257 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 20:51:34.056276 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jun 25 20:51:34.056288 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jun 25 20:51:34.056300 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jun 25 20:51:34.056312 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jun 25 20:51:34.056324 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jun 25 20:51:34.056335 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jun 25 20:51:34.056349 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jun 25 20:51:34.056373 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jun 25 20:51:34.056385 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jun 25 20:51:34.056402 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jun 25 20:51:34.056414 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jun 25 20:51:34.056427 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jun 25 20:51:34.056439 kernel: iommu: Default domain type: Translated Jun 25 20:51:34.056451 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 20:51:34.056477 kernel: PCI: Using ACPI for IRQ routing Jun 25 20:51:34.056489 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 20:51:34.056501 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 20:51:34.056513 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jun 25 20:51:34.056673 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jun 25 20:51:34.056846 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jun 25 20:51:34.057028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 20:51:34.057047 kernel: vgaarb: loaded Jun 25 20:51:34.057059 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 20:51:34.057071 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 20:51:34.057083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 20:51:34.057094 kernel: pnp: PnP ACPI init Jun 25 20:51:34.057265 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jun 25 20:51:34.057285 kernel: pnp: PnP ACPI: found 5 devices Jun 25 20:51:34.057297 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 20:51:34.057309 kernel: NET: Registered PF_INET protocol family Jun 25 20:51:34.057321 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 20:51:34.057333 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 20:51:34.057358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 20:51:34.057370 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 20:51:34.057388 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 20:51:34.057401 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 20:51:34.057413 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 20:51:34.057425 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 20:51:34.057437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 20:51:34.057449 kernel: NET: Registered PF_XDP protocol family Jun 25 20:51:34.057618 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jun 25 20:51:34.057777 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 25 20:51:34.057992 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 25 20:51:34.058151 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jun 25 20:51:34.058319 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 25 20:51:34.058490 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 25 20:51:34.058649 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 25 20:51:34.059194 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 25 20:51:34.059384 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jun 25 20:51:34.059560 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jun 25 20:51:34.059720 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jun 25 20:51:34.060317 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jun 25 20:51:34.060519 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jun 25 20:51:34.060973 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jun 25 20:51:34.061140 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jun 25 20:51:34.061300 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jun 25 20:51:34.061492 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jun 25 20:51:34.061686 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 25 
20:51:34.061877 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jun 25 20:51:34.062067 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jun 25 20:51:34.062220 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jun 25 20:51:34.062384 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 25 20:51:34.062565 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jun 25 20:51:34.062724 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jun 25 20:51:34.064947 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jun 25 20:51:34.065138 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 25 20:51:34.065306 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jun 25 20:51:34.065487 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jun 25 20:51:34.065652 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jun 25 20:51:34.066903 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 25 20:51:34.067093 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jun 25 20:51:34.067286 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jun 25 20:51:34.067477 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jun 25 20:51:34.067639 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 25 20:51:34.068844 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jun 25 20:51:34.069029 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jun 25 20:51:34.069217 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jun 25 20:51:34.069380 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 25 20:51:34.069580 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jun 25 20:51:34.069743 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jun 25 20:51:34.069934 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jun 25 20:51:34.070097 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 25 20:51:34.070257 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jun 25 20:51:34.070418 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jun 25 20:51:34.070602 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jun 25 20:51:34.070763 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 25 20:51:34.072981 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jun 25 20:51:34.073153 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jun 25 20:51:34.073317 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jun 25 20:51:34.073495 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 25 20:51:34.073653 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 20:51:34.073825 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 20:51:34.073979 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 20:51:34.074140 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jun 25 20:51:34.076981 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jun 25 20:51:34.077134 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jun 25 20:51:34.077299 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jun 25 20:51:34.077464 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jun 25 20:51:34.077638 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jun 25 20:51:34.078852 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jun 25 20:51:34.079047 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jun 25 20:51:34.079204 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jun 25 20:51:34.079358 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 25 20:51:34.079536 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jun 25 20:51:34.079689 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jun 25 20:51:34.080884 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 25 20:51:34.081079 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jun 25 20:51:34.081246 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jun 25 20:51:34.081420 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 25 20:51:34.081604 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jun 25 20:51:34.081757 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jun 25 20:51:34.084967 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 25 20:51:34.085139 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jun 25 20:51:34.085295 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jun 25 20:51:34.085475 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 25 20:51:34.085640 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jun 25 20:51:34.085791 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jun 25 20:51:34.085982 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 25 20:51:34.086175 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jun 25 20:51:34.086327 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jun 25 20:51:34.086503 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 25 20:51:34.086525 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jun 25 20:51:34.086539 kernel: PCI: CLS 0 bytes, default 64 Jun 25 20:51:34.086552 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 20:51:34.086565 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jun 25 20:51:34.086578 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 20:51:34.086591 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 20:51:34.086604 kernel: Initialise system trusted keyrings Jun 25 20:51:34.086617 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 20:51:34.086638 kernel: Key type asymmetric registered Jun 25 20:51:34.086651 kernel: Asymmetric key parser 'x509' registered Jun 25 20:51:34.086664 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 20:51:34.086676 kernel: io scheduler mq-deadline registered Jun 25 20:51:34.086689 kernel: io scheduler kyber registered Jun 25 20:51:34.086702 kernel: io scheduler bfq registered Jun 25 20:51:34.088502 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jun 25 20:51:34.088677 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jun 25 20:51:34.089881 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.090078 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jun 25 20:51:34.090253 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jun 25 20:51:34.090416 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.090597 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jun 25 20:51:34.090759 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jun 25 20:51:34.090957 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.091145 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jun 25 20:51:34.091304 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jun 25 20:51:34.091477 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.091643 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jun 25 20:51:34.091912 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jun 25 20:51:34.095026 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.095213 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jun 25 20:51:34.095403 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jun 25 20:51:34.095580 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.095743 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jun 25 20:51:34.096942 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jun 25 20:51:34.097106 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.097278 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jun 25 20:51:34.097447 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jun 25 20:51:34.097625 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 25 20:51:34.097646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 20:51:34.097660 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jun 25 20:51:34.097674 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jun 25 20:51:34.097695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 20:51:34.097708 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 20:51:34.097721 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 20:51:34.097734 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 20:51:34.097747 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 20:51:34.097760 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 20:51:34.099958 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 25 20:51:34.100114 kernel: rtc_cmos 00:03: registered as rtc0 Jun 25 20:51:34.100273 kernel: rtc_cmos 00:03: setting system clock to 2024-06-25T20:51:33 UTC (1719348693) Jun 25 20:51:34.100422 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 25 20:51:34.100442 kernel: intel_pstate: CPU model not supported Jun 25 20:51:34.100479 kernel: NET: Registered PF_INET6 protocol family Jun 25 20:51:34.100493 kernel: Segment Routing with IPv6 Jun 25 20:51:34.100506 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 
20:51:34.100519 kernel: NET: Registered PF_PACKET protocol family Jun 25 20:51:34.100532 kernel: Key type dns_resolver registered Jun 25 20:51:34.100544 kernel: IPI shorthand broadcast: enabled Jun 25 20:51:34.100563 kernel: sched_clock: Marking stable (1167003643, 244090859)->(1638804288, -227709786) Jun 25 20:51:34.100576 kernel: registered taskstats version 1 Jun 25 20:51:34.100589 kernel: Loading compiled-in X.509 certificates Jun 25 20:51:34.100602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 20:51:34.100614 kernel: Key type .fscrypt registered Jun 25 20:51:34.100627 kernel: Key type fscrypt-provisioning registered Jun 25 20:51:34.100640 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 20:51:34.100653 kernel: ima: Allocated hash algorithm: sha1 Jun 25 20:51:34.100670 kernel: ima: No architecture policies found Jun 25 20:51:34.100683 kernel: clk: Disabling unused clocks Jun 25 20:51:34.100696 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 20:51:34.100709 kernel: Write protecting the kernel read-only data: 36864k Jun 25 20:51:34.100722 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 20:51:34.100735 kernel: Run /init as init process Jun 25 20:51:34.100747 kernel: with arguments: Jun 25 20:51:34.100764 kernel: /init Jun 25 20:51:34.100776 kernel: with environment: Jun 25 20:51:34.100788 kernel: HOME=/ Jun 25 20:51:34.100821 kernel: TERM=linux Jun 25 20:51:34.100834 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 20:51:34.100857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 20:51:34.100881 systemd[1]: Detected virtualization kvm. Jun 25 20:51:34.100895 systemd[1]: Detected architecture x86-64. Jun 25 20:51:34.100908 systemd[1]: Running in initrd. Jun 25 20:51:34.100922 systemd[1]: No hostname configured, using default hostname. Jun 25 20:51:34.100941 systemd[1]: Hostname set to . Jun 25 20:51:34.100956 systemd[1]: Initializing machine ID from VM UUID. Jun 25 20:51:34.100970 systemd[1]: Queued start job for default target initrd.target. Jun 25 20:51:34.100993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 20:51:34.101007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 20:51:34.101021 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 20:51:34.101035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 20:51:34.101057 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 20:51:34.101076 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 20:51:34.101092 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 20:51:34.101114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 20:51:34.101128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jun 25 20:51:34.101141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 20:51:34.101155 systemd[1]: Reached target paths.target - Path Units. Jun 25 20:51:34.101168 systemd[1]: Reached target slices.target - Slice Units. Jun 25 20:51:34.101187 systemd[1]: Reached target swap.target - Swaps. Jun 25 20:51:34.101201 systemd[1]: Reached target timers.target - Timer Units. Jun 25 20:51:34.101214 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 20:51:34.101235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 20:51:34.101249 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 20:51:34.101263 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 20:51:34.101277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 20:51:34.101290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 20:51:34.101304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 20:51:34.101324 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 20:51:34.101337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 20:51:34.101351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 20:51:34.101365 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 20:51:34.101379 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 20:51:34.101392 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 20:51:34.101406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 20:51:34.101420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 20:51:34.101488 systemd-journald[200]: Collecting audit messages is disabled. Jun 25 20:51:34.101522 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 20:51:34.101536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 20:51:34.101550 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 20:51:34.101574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 20:51:34.101589 systemd-journald[200]: Journal started Jun 25 20:51:34.101614 systemd-journald[200]: Runtime Journal (/run/log/journal/9e8b18b63d1e4828832779306beb6e21) is 4.7M, max 38.0M, 33.2M free. Jun 25 20:51:34.074946 systemd-modules-load[201]: Inserted module 'overlay' Jun 25 20:51:34.168783 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 20:51:34.168847 kernel: Bridge firewalling registered Jun 25 20:51:34.168872 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 20:51:34.116728 systemd-modules-load[201]: Inserted module 'br_netfilter' Jun 25 20:51:34.172429 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 20:51:34.173476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 20:51:34.186039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 20:51:34.191006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 20:51:34.204188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Jun 25 20:51:34.206467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 20:51:34.210684 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 20:51:34.212726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 20:51:34.218022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 20:51:34.222677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 20:51:34.232937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 20:51:34.243584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 20:51:34.248177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 20:51:34.257069 dracut-cmdline[232]: dracut-dracut-053 Jun 25 20:51:34.261838 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 20:51:34.292964 systemd-resolved[237]: Positive Trust Anchors: Jun 25 20:51:34.293987 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 20:51:34.294032 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 20:51:34.302375 systemd-resolved[237]: Defaulting to hostname 'linux'. Jun 25 20:51:34.305660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 20:51:34.306594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 20:51:34.366890 kernel: SCSI subsystem initialized Jun 25 20:51:34.380843 kernel: Loading iSCSI transport class v2.0-870. Jun 25 20:51:34.395829 kernel: iscsi: registered transport (tcp) Jun 25 20:51:34.427175 kernel: iscsi: registered transport (qla4xxx) Jun 25 20:51:34.427263 kernel: QLogic iSCSI HBA Driver Jun 25 20:51:34.483237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 20:51:34.488992 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 20:51:34.532573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 25 20:51:34.532650 kernel: device-mapper: uevent: version 1.0.3 Jun 25 20:51:34.532670 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 20:51:34.586886 kernel: raid6: sse2x4 gen() 13836 MB/s Jun 25 20:51:34.604877 kernel: raid6: sse2x2 gen() 9451 MB/s Jun 25 20:51:34.623459 kernel: raid6: sse2x1 gen() 9840 MB/s Jun 25 20:51:34.623506 kernel: raid6: using algorithm sse2x4 gen() 13836 MB/s Jun 25 20:51:34.642624 kernel: raid6: .... xor() 7515 MB/s, rmw enabled Jun 25 20:51:34.642684 kernel: raid6: using ssse3x2 recovery algorithm Jun 25 20:51:34.673831 kernel: xor: automatically using best checksumming function avx Jun 25 20:51:34.893838 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 20:51:34.908860 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 20:51:34.914018 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 20:51:34.942137 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jun 25 20:51:34.949238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 20:51:34.958259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 20:51:34.979782 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jun 25 20:51:35.020416 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 20:51:35.026002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 20:51:35.140904 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 20:51:35.150239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 20:51:35.175611 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 20:51:35.178378 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 20:51:35.179967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 20:51:35.182162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 20:51:35.191420 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 20:51:35.226347 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 20:51:35.268934 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 20:51:35.280826 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jun 25 20:51:35.350929 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 25 20:51:35.351129 kernel: AVX version of gcm_enc/dec engaged. Jun 25 20:51:35.351152 kernel: AES CTR mode by8 optimization enabled Jun 25 20:51:35.351182 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 20:51:35.351201 kernel: GPT:17805311 != 125829119 Jun 25 20:51:35.351218 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 20:51:35.351235 kernel: GPT:17805311 != 125829119 Jun 25 20:51:35.351252 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 20:51:35.351269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 20:51:35.311447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 20:51:35.311613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 20:51:35.313226 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 20:51:35.314008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 20:51:35.527930 kernel: ACPI: bus type USB registered Jun 25 20:51:35.527975 kernel: usbcore: registered new interface driver usbfs Jun 25 20:51:35.527994 kernel: usbcore: registered new interface driver hub Jun 25 20:51:35.528012 kernel: usbcore: registered new device driver usb Jun 25 20:51:35.528030 kernel: libata version 3.00 loaded. Jun 25 20:51:35.528047 kernel: ahci 0000:00:1f.2: version 3.0 Jun 25 20:51:35.528352 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Jun 25 20:51:35.528373 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 25 20:51:35.528400 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jun 25 20:51:35.528610 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 25 20:51:35.528826 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jun 25 20:51:35.529035 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jun 25 20:51:35.529233 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jun 25 20:51:35.529435 kernel: scsi host0: ahci Jun 25 20:51:35.529639 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jun 25 20:51:35.530637 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jun 25 20:51:35.530866 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jun 25 20:51:35.531067 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (468) Jun 25 20:51:35.531088 kernel: scsi host1: ahci Jun 25 20:51:35.531279 kernel: scsi host2: ahci Jun 25 20:51:35.531486 kernel: hub 1-0:1.0: USB hub found Jun 25 20:51:35.531711 kernel: hub 1-0:1.0: 4 ports detected Jun 25 20:51:35.531958 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 25 20:51:35.532232 kernel: hub 2-0:1.0: USB hub found Jun 25 20:51:35.532491 kernel: hub 2-0:1.0: 4 ports detected Jun 25 20:51:35.532699 kernel: scsi host3: ahci Jun 25 20:51:35.532927 kernel: scsi host4: ahci Jun 25 20:51:35.533115 kernel: scsi host5: ahci Jun 25 20:51:35.533310 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jun 25 20:51:35.533337 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jun 25 20:51:35.533356 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jun 25 20:51:35.533373 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jun 25 20:51:35.533391 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jun 25 20:51:35.533412 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jun 25 20:51:35.314190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 20:51:35.316411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 20:51:35.323110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 20:51:35.423477 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 20:51:35.534388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 20:51:35.547761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 20:51:35.560420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 25 20:51:35.566590 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 20:51:35.567640 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 20:51:35.580089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 20:51:35.584975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 20:51:35.586898 disk-uuid[563]: Primary Header is updated. Jun 25 20:51:35.586898 disk-uuid[563]: Secondary Entries is updated. Jun 25 20:51:35.586898 disk-uuid[563]: Secondary Header is updated. Jun 25 20:51:35.593706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 20:51:35.598821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 20:51:35.606825 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 20:51:35.612079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 20:51:35.700034 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jun 25 20:51:35.785479 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.785546 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.785833 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.792503 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.792538 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.794253 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 25 20:51:35.844847 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 20:51:35.854373 kernel: usbcore: registered new interface driver usbhid Jun 25 20:51:35.854419 kernel: usbhid: USB HID core driver Jun 25 20:51:35.862856 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jun 25 20:51:35.862911 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jun 25 20:51:36.607865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 20:51:36.609846 disk-uuid[564]: The operation has completed successfully. Jun 25 20:51:36.668463 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 20:51:36.668618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 20:51:36.689005 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 20:51:36.694162 sh[586]: Success Jun 25 20:51:36.711843 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jun 25 20:51:36.770292 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 20:51:36.784891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 20:51:36.788276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 25 20:51:36.825840 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 20:51:36.825897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 20:51:36.825917 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 20:51:36.828729 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 20:51:36.828760 kernel: BTRFS info (device dm-0): using free space tree Jun 25 20:51:36.839397 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 20:51:36.840834 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 20:51:36.845995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 20:51:36.848971 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 20:51:36.863370 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 20:51:36.863432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 20:51:36.866354 kernel: BTRFS info (device vda6): using free space tree Jun 25 20:51:36.871847 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 20:51:36.887244 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 20:51:36.888368 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 20:51:36.896044 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 20:51:36.904069 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 20:51:37.005979 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 20:51:37.026950 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 20:51:37.049189 ignition[682]: Ignition 2.19.0 Jun 25 20:51:37.049214 ignition[682]: Stage: fetch-offline Jun 25 20:51:37.051732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 20:51:37.049333 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:37.049354 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:37.049594 ignition[682]: parsed url from cmdline: "" Jun 25 20:51:37.049605 ignition[682]: no config URL provided Jun 25 20:51:37.049616 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 20:51:37.049633 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jun 25 20:51:37.049654 ignition[682]: failed to fetch config: resource requires networking Jun 25 20:51:37.049981 ignition[682]: Ignition finished successfully Jun 25 20:51:37.067662 systemd-networkd[770]: lo: Link UP Jun 25 20:51:37.067679 systemd-networkd[770]: lo: Gained carrier Jun 25 20:51:37.070408 systemd-networkd[770]: Enumeration completed Jun 25 20:51:37.071328 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 20:51:37.071340 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 20:51:37.071351 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 20:51:37.072262 systemd[1]: Reached target network.target - Network. 
Jun 25 20:51:37.073527 systemd-networkd[770]: eth0: Link UP Jun 25 20:51:37.073533 systemd-networkd[770]: eth0: Gained carrier Jun 25 20:51:37.073545 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 20:51:37.089071 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 20:51:37.103887 systemd-networkd[770]: eth0: DHCPv4 address 10.230.13.114/30, gateway 10.230.13.113 acquired from 10.230.13.113 Jun 25 20:51:37.109843 ignition[778]: Ignition 2.19.0 Jun 25 20:51:37.109860 ignition[778]: Stage: fetch Jun 25 20:51:37.110095 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:37.110115 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:37.110252 ignition[778]: parsed url from cmdline: "" Jun 25 20:51:37.110259 ignition[778]: no config URL provided Jun 25 20:51:37.110270 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 20:51:37.110285 ignition[778]: no config at "/usr/lib/ignition/user.ign" Jun 25 20:51:37.110505 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 25 20:51:37.111503 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 25 20:51:37.111532 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 25 20:51:37.130391 ignition[778]: GET result: OK Jun 25 20:51:37.131157 ignition[778]: parsing config with SHA512: fa22c7577080663b1fb75d50740a69c5e1047a91d57d1e26827d1651cf9c131b136e63a0870a865f67361b749bfc39ccb4875872ba9abd812cfdd71fc99f36b9 Jun 25 20:51:37.137353 unknown[778]: fetched base config from "system" Jun 25 20:51:37.137381 unknown[778]: fetched base config from "system" Jun 25 20:51:37.137858 ignition[778]: fetch: fetch complete Jun 25 20:51:37.137394 unknown[778]: fetched user config from "openstack" Jun 25 20:51:37.137868 ignition[778]: fetch: fetch passed Jun 25 20:51:37.137936 ignition[778]: Ignition finished successfully Jun 25 20:51:37.141884 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 20:51:37.156034 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 20:51:37.176861 ignition[786]: Ignition 2.19.0 Jun 25 20:51:37.176880 ignition[786]: Stage: kargs Jun 25 20:51:37.177150 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:37.177172 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:37.178425 ignition[786]: kargs: kargs passed Jun 25 20:51:37.181878 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 20:51:37.178499 ignition[786]: Ignition finished successfully Jun 25 20:51:37.193051 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 20:51:37.211209 ignition[794]: Ignition 2.19.0 Jun 25 20:51:37.211235 ignition[794]: Stage: disks Jun 25 20:51:37.211554 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:37.211575 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:37.213976 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 20:51:37.212790 ignition[794]: disks: disks passed Jun 25 20:51:37.216248 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 20:51:37.212878 ignition[794]: Ignition finished successfully Jun 25 20:51:37.217129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
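The Ignition fetch stage above waits for a config drive, then falls back to the OpenStack metadata service and logs a SHA512 of the config it parsed. As a hedged illustration of what that GET amounts to (this is not Ignition's actual code), the sketch below fetches user_data from the metadata endpoint shown in the log and prints its SHA512; note that the digest Ignition records is computed over its internally rendered config, so it will not necessarily match a digest of the raw response.

    # fetch_user_data.py -- illustrative only; must run inside the instance,
    # where the link-local metadata address 169.254.169.254 is reachable.
    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url=USER_DATA_URL, timeout=5.0):
        # Ignition retries this request ("attempt #1" in the log); a single
        # attempt is enough for this sketch.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

    if __name__ == "__main__":
        data = fetch_user_data()
        print(f"fetched {len(data)} bytes")
        print("sha512:", hashlib.sha512(data).hexdigest())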
Jun 25 20:51:37.218504 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 20:51:37.220141 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 20:51:37.221496 systemd[1]: Reached target basic.target - Basic System. Jun 25 20:51:37.231021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 20:51:37.250261 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 20:51:37.255021 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 20:51:37.262958 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 20:51:37.396823 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 20:51:37.397478 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 20:51:37.399028 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 20:51:37.404956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 20:51:37.422157 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 20:51:37.423342 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 20:51:37.427585 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 25 20:51:37.428471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 20:51:37.428514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 20:51:37.433419 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 20:51:37.442849 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jun 25 20:51:37.448138 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 20:51:37.448192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 20:51:37.450767 kernel: BTRFS info (device vda6): using free space tree Jun 25 20:51:37.452065 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 20:51:37.465214 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 20:51:37.468777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 20:51:37.529522 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 20:51:37.538583 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jun 25 20:51:37.544627 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 20:51:37.553943 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 20:51:37.653909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 20:51:37.659927 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 20:51:37.662016 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 20:51:37.676855 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 20:51:37.707859 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 20:51:37.711313 ignition[932]: INFO : Ignition 2.19.0 Jun 25 20:51:37.711313 ignition[932]: INFO : Stage: mount Jun 25 20:51:37.712952 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:37.712952 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:37.714784 ignition[932]: INFO : mount: mount passed Jun 25 20:51:37.714784 ignition[932]: INFO : Ignition finished successfully Jun 25 20:51:37.714331 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 20:51:37.821385 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 20:51:38.279144 systemd-networkd[770]: eth0: Gained IPv6LL Jun 25 20:51:39.786435 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:835c:24:19ff:fee6:d72/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:835c:24:19ff:fee6:d72/64 assigned by NDisc. Jun 25 20:51:39.786452 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jun 25 20:51:44.613182 coreos-metadata[813]: Jun 25 20:51:44.613 WARN failed to locate config-drive, using the metadata service API instead Jun 25 20:51:44.637271 coreos-metadata[813]: Jun 25 20:51:44.637 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 20:51:44.690191 coreos-metadata[813]: Jun 25 20:51:44.690 INFO Fetch successful Jun 25 20:51:44.691248 coreos-metadata[813]: Jun 25 20:51:44.690 INFO wrote hostname srv-azn0z.gb1.brightbox.com to /sysroot/etc/hostname Jun 25 20:51:44.692677 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 25 20:51:44.692859 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 25 20:51:44.706940 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 20:51:44.715520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 20:51:44.732839 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (951) Jun 25 20:51:44.736325 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 20:51:44.736371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 20:51:44.738267 kernel: BTRFS info (device vda6): using free space tree Jun 25 20:51:44.743821 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 20:51:44.746445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
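The flatcar-openstack-hostname unit above first looks for a config drive, then ("using the metadata service API instead") fetches the hostname and writes it to /sysroot/etc/hostname. A hedged sketch of that fallback-and-write behaviour follows; the real agent is a Rust binary that does considerably more, and the label names, timeout, and paths here are taken from the log or assumed.

    # hostname_from_metadata.py -- illustrative sketch of the fallback seen in
    # the log; the config-drive labels and target path mirror the log output.
    import os
    import urllib.request

    CONFIG_DRIVE_LABELS = ("config-2", "CONFIG-2")     # labels waited on earlier
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def config_drive_present():
        return any(
            os.path.exists(f"/dev/disk/by-label/{label}")
            for label in CONFIG_DRIVE_LABELS
        )

    def fetch_hostname(timeout=5.0):
        with urllib.request.urlopen(HOSTNAME_URL, timeout=timeout) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        if not config_drive_present():
            hostname = fetch_hostname()      # metadata service fallback
            with open("/sysroot/etc/hostname", "w") as f:
                f.write(hostname + "\n")
            print("wrote hostname", hostname)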
Jun 25 20:51:44.774304 ignition[969]: INFO : Ignition 2.19.0 Jun 25 20:51:44.774304 ignition[969]: INFO : Stage: files Jun 25 20:51:44.776099 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:44.776099 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:44.776099 ignition[969]: DEBUG : files: compiled without relabeling support, skipping Jun 25 20:51:44.778959 ignition[969]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 20:51:44.778959 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 20:51:44.781053 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 20:51:44.782437 ignition[969]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 20:51:44.783487 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 20:51:44.782998 unknown[969]: wrote ssh authorized keys file for user: core Jun 25 20:51:44.785461 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 20:51:44.785461 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 20:51:45.447521 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 20:51:45.664200 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 20:51:45.664200 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 20:51:45.664200 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 25 20:51:46.285612 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 20:51:46.602570 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 20:51:46.602570 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 20:51:46.610981 
ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 20:51:46.610981 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 20:51:47.097579 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 20:51:48.276487 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 20:51:48.276487 ignition[969]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 20:51:48.280243 ignition[969]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 20:51:48.280243 ignition[969]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 20:51:48.280243 ignition[969]: INFO : files: files passed Jun 25 20:51:48.280243 ignition[969]: INFO : Ignition finished successfully Jun 25 20:51:48.281414 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 20:51:48.293723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 20:51:48.298531 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 20:51:48.300273 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 20:51:48.300450 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
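Among the files written above, the Ignition files stage downloads the kubernetes sysext image to /opt/extensions/ and links it from /etc/extensions/kubernetes.raw, which is how systemd-sysext later finds and merges it (see the sd-merge entries further down). As a hedged sketch of just that layout step, the snippet below recreates the directory-and-symlink arrangement under a scratch root instead of the real /sysroot; the scratch path is an assumption and no image is actually downloaded.

    # sysext_symlink_layout.py -- recreates only the link/file layout from the
    # "files" stage above under a scratch directory; SYSROOT is an assumption.
    import os

    SYSROOT = "/tmp/sysroot-demo"                      # stand-in for /sysroot
    RAW = "opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
    LINK = "etc/extensions/kubernetes.raw"

    def lay_out_extension(root):
        raw_path = os.path.join(root, RAW)
        link_path = os.path.join(root, LINK)
        os.makedirs(os.path.dirname(raw_path), exist_ok=True)
        os.makedirs(os.path.dirname(link_path), exist_ok=True)
        # The real stage downloads the image here; an empty placeholder suffices.
        open(raw_path, "wb").close()
        # The link target is absolute relative to the deployed root, as in the log.
        if not os.path.lexists(link_path):
            os.symlink("/" + RAW, link_path)

    if __name__ == "__main__":
        lay_out_extension(SYSROOT)
        print(os.readlink(os.path.join(SYSROOT, LINK)))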
Jun 25 20:51:48.315247 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 20:51:48.315247 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 20:51:48.318202 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 20:51:48.320801 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 20:51:48.322832 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 20:51:48.332103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 20:51:48.374425 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 20:51:48.374614 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 20:51:48.376698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 20:51:48.378077 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 20:51:48.379650 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 20:51:48.387004 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 20:51:48.403867 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 20:51:48.412075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 20:51:48.437766 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 20:51:48.438788 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 20:51:48.440544 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 20:51:48.442086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 20:51:48.442257 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 20:51:48.444151 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 20:51:48.445164 systemd[1]: Stopped target basic.target - Basic System. Jun 25 20:51:48.446650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 20:51:48.448083 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 20:51:48.449495 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 20:51:48.451158 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 20:51:48.452788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 20:51:48.454479 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 20:51:48.456014 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 20:51:48.457583 systemd[1]: Stopped target swap.target - Swaps. Jun 25 20:51:48.458991 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 20:51:48.459184 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 20:51:48.460901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 20:51:48.461870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 20:51:48.463289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 20:51:48.463465 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 25 20:51:48.464907 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 20:51:48.465085 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 20:51:48.467085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 20:51:48.467252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 20:51:48.468179 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 20:51:48.468345 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 20:51:48.476145 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 20:51:48.480110 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 20:51:48.480870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 20:51:48.481120 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 20:51:48.483453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 20:51:48.483956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 20:51:48.498142 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 20:51:48.498279 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 20:51:48.514821 ignition[1022]: INFO : Ignition 2.19.0 Jun 25 20:51:48.514821 ignition[1022]: INFO : Stage: umount Jun 25 20:51:48.518525 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 20:51:48.518525 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 20:51:48.518525 ignition[1022]: INFO : umount: umount passed Jun 25 20:51:48.518525 ignition[1022]: INFO : Ignition finished successfully Jun 25 20:51:48.518959 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 20:51:48.521493 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 20:51:48.521673 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 20:51:48.523139 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 20:51:48.523260 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 20:51:48.524894 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 20:51:48.524976 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 20:51:48.526397 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 20:51:48.526471 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 20:51:48.527858 systemd[1]: Stopped target network.target - Network. Jun 25 20:51:48.529186 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 20:51:48.529266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 20:51:48.530730 systemd[1]: Stopped target paths.target - Path Units. Jun 25 20:51:48.532161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 20:51:48.535892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 20:51:48.536862 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 20:51:48.538233 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 20:51:48.539781 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 20:51:48.539885 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 20:51:48.541311 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jun 25 20:51:48.541373 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 20:51:48.542749 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 20:51:48.542841 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 20:51:48.544320 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 20:51:48.544401 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 20:51:48.546307 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 20:51:48.548400 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 20:51:48.550169 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 20:51:48.550314 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 20:51:48.551009 systemd-networkd[770]: eth0: DHCPv6 lease lost Jun 25 20:51:48.553948 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 20:51:48.554116 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 20:51:48.559203 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 20:51:48.559378 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 20:51:48.561515 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 20:51:48.561679 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 20:51:48.566629 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 20:51:48.567127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 20:51:48.581433 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 20:51:48.582169 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 20:51:48.582240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 20:51:48.583281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 20:51:48.583355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 20:51:48.585097 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 20:51:48.585183 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 20:51:48.586690 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 20:51:48.586764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 20:51:48.588676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 20:51:48.599319 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 20:51:48.599556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 20:51:48.602469 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 20:51:48.602557 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 20:51:48.604294 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 20:51:48.604351 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 20:51:48.605875 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 20:51:48.605937 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 20:51:48.609531 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 20:51:48.609600 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 25 20:51:48.611104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 20:51:48.611180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 20:51:48.618037 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 20:51:48.619880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 20:51:48.619950 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 20:51:48.621844 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 20:51:48.621915 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 20:51:48.622693 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 20:51:48.622754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 20:51:48.623622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 20:51:48.623688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 20:51:48.625037 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 20:51:48.625189 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 20:51:48.633081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 20:51:48.633232 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 20:51:48.634791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 20:51:48.643371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 20:51:48.652849 systemd[1]: Switching root. Jun 25 20:51:48.689688 systemd-journald[200]: Journal stopped Jun 25 20:51:50.298455 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Jun 25 20:51:50.298589 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 20:51:50.298627 kernel: SELinux: policy capability open_perms=1 Jun 25 20:51:50.298646 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 20:51:50.298684 kernel: SELinux: policy capability always_check_network=0 Jun 25 20:51:50.298705 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 20:51:50.301832 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 20:51:50.301878 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 20:51:50.301900 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 20:51:50.301930 kernel: audit: type=1403 audit(1719348709.108:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 20:51:50.301961 systemd[1]: Successfully loaded SELinux policy in 47.963ms. Jun 25 20:51:50.302009 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.548ms. Jun 25 20:51:50.302045 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 20:51:50.302068 systemd[1]: Detected virtualization kvm. Jun 25 20:51:50.302088 systemd[1]: Detected architecture x86-64. Jun 25 20:51:50.302123 systemd[1]: Detected first boot. Jun 25 20:51:50.302146 systemd[1]: Hostname set to . Jun 25 20:51:50.302167 systemd[1]: Initializing machine ID from VM UUID. 
Jun 25 20:51:50.302187 zram_generator::config[1065]: No configuration found. Jun 25 20:51:50.302214 systemd[1]: Populated /etc with preset unit settings. Jun 25 20:51:50.302236 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 20:51:50.302256 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 20:51:50.302282 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 20:51:50.302320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 20:51:50.302343 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 20:51:50.302374 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 20:51:50.302395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 20:51:50.302457 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 20:51:50.302480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 20:51:50.302511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 20:51:50.302529 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 20:51:50.302547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 20:51:50.302597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 20:51:50.302625 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 20:51:50.302645 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 20:51:50.302671 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 20:51:50.302690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 20:51:50.302723 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 20:51:50.302742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 20:51:50.302773 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 20:51:50.302858 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 20:51:50.302903 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 20:51:50.302925 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 20:51:50.302946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 20:51:50.302966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 20:51:50.302986 systemd[1]: Reached target slices.target - Slice Units. Jun 25 20:51:50.303031 systemd[1]: Reached target swap.target - Swaps. Jun 25 20:51:50.303075 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 20:51:50.303097 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 20:51:50.303119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 20:51:50.303139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 20:51:50.303159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 20:51:50.303180 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jun 25 20:51:50.303213 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 20:51:50.303236 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 20:51:50.303262 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 20:51:50.303284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:50.303304 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 20:51:50.303334 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 20:51:50.303355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 20:51:50.303378 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 20:51:50.303405 systemd[1]: Reached target machines.target - Containers. Jun 25 20:51:50.303438 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 20:51:50.303468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 20:51:50.303502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 20:51:50.303522 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 20:51:50.303549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 20:51:50.303570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 20:51:50.303597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 20:51:50.303618 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 20:51:50.303649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 20:51:50.303671 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 20:51:50.303694 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 20:51:50.303715 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 20:51:50.303734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 20:51:50.303758 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 20:51:50.303778 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 20:51:50.305725 kernel: fuse: init (API version 7.39) Jun 25 20:51:50.305763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 20:51:50.305816 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 20:51:50.305851 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 20:51:50.305879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 20:51:50.305905 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 20:51:50.305924 systemd[1]: Stopped verity-setup.service. Jun 25 20:51:50.305942 kernel: loop: module loaded Jun 25 20:51:50.305960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:50.305978 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jun 25 20:51:50.306009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 20:51:50.306058 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 20:51:50.306080 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 20:51:50.306102 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 20:51:50.306123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 20:51:50.306184 systemd-journald[1153]: Collecting audit messages is disabled. Jun 25 20:51:50.306233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 20:51:50.306256 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 20:51:50.306291 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 20:51:50.306315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 20:51:50.306337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 20:51:50.306369 systemd-journald[1153]: Journal started Jun 25 20:51:50.306403 systemd-journald[1153]: Runtime Journal (/run/log/journal/9e8b18b63d1e4828832779306beb6e21) is 4.7M, max 38.0M, 33.2M free. Jun 25 20:51:49.903092 systemd[1]: Queued start job for default target multi-user.target. Jun 25 20:51:49.924257 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 20:51:49.924997 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 20:51:50.309877 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 20:51:50.314089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 20:51:50.314319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 20:51:50.315935 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 20:51:50.317055 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 20:51:50.318384 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 20:51:50.319110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 20:51:50.322723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 20:51:50.323883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 20:51:50.329005 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 20:51:50.346365 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 20:51:50.373830 kernel: ACPI: bus type drm_connector registered Jun 25 20:51:50.379046 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 20:51:50.387256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 20:51:50.388761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 20:51:50.388947 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 20:51:50.391190 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 20:51:50.396041 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 20:51:50.406615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jun 25 20:51:50.407947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 20:51:50.417990 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 20:51:50.423123 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 20:51:50.424998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 20:51:50.431173 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 20:51:50.433059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 20:51:50.442390 systemd-journald[1153]: Time spent on flushing to /var/log/journal/9e8b18b63d1e4828832779306beb6e21 is 117.987ms for 1135 entries. Jun 25 20:51:50.442390 systemd-journald[1153]: System Journal (/var/log/journal/9e8b18b63d1e4828832779306beb6e21) is 8.0M, max 584.8M, 576.8M free. Jun 25 20:51:50.617220 systemd-journald[1153]: Received client request to flush runtime journal. Jun 25 20:51:50.617294 kernel: loop0: detected capacity change from 0 to 211296 Jun 25 20:51:50.617351 kernel: block loop0: the capability attribute has been deprecated. Jun 25 20:51:50.617481 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 20:51:50.444031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 20:51:50.459991 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 20:51:50.467069 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 20:51:50.472418 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 20:51:50.473655 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 20:51:50.473944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 20:51:50.475001 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 20:51:50.475958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 20:51:50.479088 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 20:51:50.487971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 20:51:50.495545 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 20:51:50.507405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 20:51:50.591911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 20:51:50.603690 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 20:51:50.607361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 20:51:50.608367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 20:51:50.610732 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 20:51:50.619292 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 20:51:50.627845 kernel: loop1: detected capacity change from 0 to 80568 Jun 25 20:51:50.635419 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. 
Jun 25 20:51:50.636343 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jun 25 20:51:50.648982 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 20:51:50.669014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 20:51:50.681228 kernel: loop2: detected capacity change from 0 to 8 Jun 25 20:51:50.685275 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 20:51:50.737835 kernel: loop3: detected capacity change from 0 to 139760 Jun 25 20:51:50.764261 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 20:51:50.788133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 20:51:50.803955 kernel: loop4: detected capacity change from 0 to 211296 Jun 25 20:51:50.822986 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jun 25 20:51:50.823023 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Jun 25 20:51:50.831895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 20:51:50.839835 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 20:51:50.870262 kernel: loop6: detected capacity change from 0 to 8 Jun 25 20:51:50.878030 kernel: loop7: detected capacity change from 0 to 139760 Jun 25 20:51:50.916730 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jun 25 20:51:50.919882 (sd-merge)[1224]: Merged extensions into '/usr'. Jun 25 20:51:50.929381 systemd[1]: Reloading requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 20:51:50.929401 systemd[1]: Reloading... Jun 25 20:51:51.038539 zram_generator::config[1250]: No configuration found. Jun 25 20:51:51.266359 ldconfig[1190]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 20:51:51.307282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 20:51:51.377164 systemd[1]: Reloading finished in 446 ms. Jun 25 20:51:51.418744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 20:51:51.421269 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 20:51:51.431102 systemd[1]: Starting ensure-sysext.service... Jun 25 20:51:51.434083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 20:51:51.460027 systemd[1]: Reloading requested from client PID 1306 ('systemctl') (unit ensure-sysext.service)... Jun 25 20:51:51.460065 systemd[1]: Reloading... Jun 25 20:51:51.467066 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 20:51:51.467671 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 20:51:51.469130 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 20:51:51.469526 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Jun 25 20:51:51.469627 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. 
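The sd-merge entries above show systemd-sysext attaching the extension images (the loop0-loop7 capacity changes) and overlaying 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' onto /usr. The hedged sketch below only enumerates candidate extension images in directories systemd-sysext is commonly documented to scan (treated as an assumption here); it does not perform the actual overlay merge.

    # list_sysext_images.py -- hedged sketch: lists candidate extension images;
    # the search directories are an assumption, not taken from this log.
    import os

    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def candidate_extensions():
        found = []
        for directory in SEARCH_DIRS:
            if not os.path.isdir(directory):
                continue
            for entry in sorted(os.listdir(directory)):
                path = os.path.join(directory, entry)
                if entry.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    if __name__ == "__main__":
        for path in candidate_extensions():
            print(path)

On this host the kubernetes.raw link written earlier by Ignition would be among the entries listed.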
Jun 25 20:51:51.474378 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 20:51:51.474395 systemd-tmpfiles[1307]: Skipping /boot Jun 25 20:51:51.488698 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 20:51:51.488718 systemd-tmpfiles[1307]: Skipping /boot Jun 25 20:51:51.562857 zram_generator::config[1335]: No configuration found. Jun 25 20:51:51.733905 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 20:51:51.805248 systemd[1]: Reloading finished in 344 ms. Jun 25 20:51:51.833022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 20:51:51.840464 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 20:51:51.853036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 20:51:51.863030 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 20:51:51.867028 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 20:51:51.878902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 20:51:51.883169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 20:51:51.887114 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 20:51:51.894525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.895446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 20:51:51.904152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 20:51:51.913114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 20:51:51.916169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 20:51:51.917076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 20:51:51.917225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.930399 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 20:51:51.933188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.933463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 20:51:51.933721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 20:51:51.934861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.938383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.938686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 25 20:51:51.946102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 20:51:51.947331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 20:51:51.947556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 20:51:51.954469 systemd[1]: Finished ensure-sysext.service. Jun 25 20:51:51.969143 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 20:51:51.970593 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 20:51:51.971952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 20:51:51.972181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 20:51:51.975893 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 20:51:52.004719 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 20:51:52.011639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 20:51:52.012917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 20:51:52.014428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 20:51:52.018310 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 20:51:52.018563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 20:51:52.020441 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 20:51:52.022001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 20:51:52.025093 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 20:51:52.048506 systemd-udevd[1398]: Using default interface naming scheme 'v255'. Jun 25 20:51:52.055673 augenrules[1425]: No rules Jun 25 20:51:52.056062 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 20:51:52.058039 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 20:51:52.065915 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 20:51:52.069447 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 20:51:52.071906 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 20:51:52.096593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 20:51:52.109064 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 20:51:52.189744 systemd-resolved[1394]: Positive Trust Anchors: Jun 25 20:51:52.189770 systemd-resolved[1394]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 20:51:52.189835 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 20:51:52.196735 systemd-resolved[1394]: Using system hostname 'srv-azn0z.gb1.brightbox.com'. Jun 25 20:51:52.199366 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 20:51:52.200362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 20:51:52.241479 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 20:51:52.242618 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 20:51:52.269310 systemd-networkd[1442]: lo: Link UP Jun 25 20:51:52.269323 systemd-networkd[1442]: lo: Gained carrier Jun 25 20:51:52.270285 systemd-networkd[1442]: Enumeration completed Jun 25 20:51:52.270427 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 20:51:52.272293 systemd[1]: Reached target network.target - Network. Jun 25 20:51:52.280276 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 20:51:52.287492 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 20:51:52.335934 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1452) Jun 25 20:51:52.375856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1447) Jun 25 20:51:52.423848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 20:51:52.427839 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 20:51:52.430692 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 20:51:52.430706 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 20:51:52.433931 systemd-networkd[1442]: eth0: Link UP Jun 25 20:51:52.433944 systemd-networkd[1442]: eth0: Gained carrier Jun 25 20:51:52.433975 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 20:51:52.445968 systemd-networkd[1442]: eth0: DHCPv4 address 10.230.13.114/30, gateway 10.230.13.113 acquired from 10.230.13.113 Jun 25 20:51:52.446887 kernel: ACPI: button: Power Button [PWRF] Jun 25 20:51:52.448530 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jun 25 20:51:52.461700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 20:51:52.469050 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
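At this point eth0 has its DHCPv4 lease and systemd-resolved has loaded its trust anchors and picked the system hostname. A small sketch that shells out to the standard networkctl and resolvectl client tools to show the same state interactively (the interface name eth0 is taken from this log and may differ on other machines):

    #!/usr/bin/env python3
    """Show the link and resolver state described by the journal entries above.

    Sketch only: networkctl and resolvectl are the standard systemd-networkd and
    systemd-resolved client tools; output goes straight to the terminal.
    """
    import subprocess

    for cmd in (["networkctl", "status", "eth0"], ["resolvectl", "status"]):
        print(f"$ {' '.join(cmd)}")
        subprocess.run(cmd, check=False)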
Jun 25 20:51:52.498834 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 25 20:51:52.508967 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jun 25 20:51:52.509237 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 25 20:51:52.506011 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 20:51:52.521830 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jun 25 20:51:52.592165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 20:51:52.778787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 20:51:52.795387 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 20:51:52.802067 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 20:51:52.831664 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 20:51:52.872634 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 20:51:52.874545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 20:51:52.875341 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 20:51:52.876241 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 20:51:52.877278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 20:51:52.878453 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 20:51:52.879387 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 20:51:52.880201 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 20:51:52.881016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 20:51:52.881071 systemd[1]: Reached target paths.target - Path Units. Jun 25 20:51:52.881722 systemd[1]: Reached target timers.target - Timer Units. Jun 25 20:51:52.884027 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 20:51:52.886553 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 20:51:52.892916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 20:51:52.895453 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 20:51:52.896993 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 20:51:52.897866 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 20:51:52.898542 systemd[1]: Reached target basic.target - Basic System. Jun 25 20:51:52.899284 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 20:51:52.899335 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 20:51:52.907605 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 20:51:52.913010 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 20:51:52.914712 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jun 25 20:51:52.924106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 20:51:52.929977 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 20:51:52.933316 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 20:51:52.934910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 20:51:52.938069 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 20:51:52.949667 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 20:51:52.953071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 20:51:52.962538 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 20:51:52.979568 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 20:51:52.982077 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 20:51:52.983553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 20:51:52.991028 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 20:51:53.008350 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 20:51:53.012422 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 20:51:53.023635 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 20:51:53.024593 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 20:51:53.034415 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 20:51:53.045307 update_engine[1497]: I0625 20:51:53.042116 1497 main.cc:92] Flatcar Update Engine starting Jun 25 20:51:53.034900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 20:51:53.052098 jq[1485]: false Jun 25 20:51:53.053164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 20:51:53.053480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 20:51:53.076085 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 20:51:53.075228 dbus-daemon[1484]: [system] SELinux support is enabled Jun 25 20:51:53.083617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 20:51:53.084893 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 20:51:53.085761 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 20:51:53.086249 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jun 25 20:51:53.103768 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1442 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 20:51:53.107612 jq[1501]: true Jun 25 20:51:53.134163 update_engine[1497]: I0625 20:51:53.125524 1497 update_check_scheduler.cc:74] Next update check in 7m23s Jun 25 20:51:53.134239 extend-filesystems[1486]: Found loop4 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found loop5 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found loop6 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found loop7 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda1 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda2 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda3 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found usr Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda4 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda6 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda7 Jun 25 20:51:53.134239 extend-filesystems[1486]: Found vda9 Jun 25 20:51:53.134239 extend-filesystems[1486]: Checking size of /dev/vda9 Jun 25 20:51:53.223942 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jun 25 20:51:53.111610 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 20:51:53.224434 tar[1505]: linux-amd64/helm Jun 25 20:51:53.224772 extend-filesystems[1486]: Resized partition /dev/vda9 Jun 25 20:51:53.133047 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 20:51:53.243226 extend-filesystems[1525]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 20:51:53.144511 systemd[1]: Started update-engine.service - Update Engine. Jun 25 20:51:53.254376 jq[1519]: true Jun 25 20:51:53.159438 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 20:51:53.307842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1452) Jun 25 20:51:53.313624 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button) Jun 25 20:51:53.315175 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 20:51:53.316354 systemd-logind[1493]: New seat seat0. Jun 25 20:51:53.325956 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 20:51:53.421268 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jun 25 20:51:53.420377 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 20:51:53.423110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 20:51:53.435242 systemd[1]: Starting sshkeys.service... Jun 25 20:51:53.475468 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 20:51:53.476760 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 20:51:53.489225 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
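The kernel line above records an on-line ext4 grow of /dev/vda9 from 1617920 to 15121403 blocks. A short worked conversion of those block counts into sizes (assuming the 4 KiB block size that resize2fs reports for this filesystem a little further down in the log):

    #!/usr/bin/env python3
    """Convert the ext4 block counts reported above into human-readable sizes.

    The 4 KiB block size is the one resize2fs reports for /dev/vda9 in this
    log; the block counts are copied verbatim from the kernel message.
    """
    BLOCK_SIZE = 4096          # bytes, "(4k) blocks" per resize2fs
    OLD_BLOCKS = 1_617_920     # size before the on-line resize
    NEW_BLOCKS = 15_121_403    # size after the on-line resize

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
    # prints roughly: before: 6.17 GiB, after: 57.68 GiB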
Jun 25 20:51:53.490130 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 20:51:53.491461 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 20:51:53.507233 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 20:51:53.553191 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 25 20:51:53.556728 polkitd[1554]: Started polkitd version 121 Jun 25 20:51:53.570410 polkitd[1554]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 20:51:53.570516 polkitd[1554]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 20:51:53.587398 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 20:51:53.587398 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 25 20:51:53.587398 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 25 20:51:53.595265 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Jun 25 20:51:53.594791 polkitd[1554]: Finished loading, compiling and executing 2 rules Jun 25 20:51:53.588248 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 20:51:53.588554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 20:51:53.599487 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 20:51:53.600217 polkitd[1554]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 20:51:53.600545 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 20:51:53.625586 systemd-hostnamed[1517]: Hostname set to (static) Jun 25 20:51:53.730156 containerd[1514]: time="2024-06-25T20:51:53.728341592Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 20:51:53.761831 containerd[1514]: time="2024-06-25T20:51:53.761104442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 20:51:53.761831 containerd[1514]: time="2024-06-25T20:51:53.761158346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.763698477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.763836707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764213867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764355848Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764483202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764651967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764686168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.764933437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.765448153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.765486615Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 20:51:53.765831 containerd[1514]: time="2024-06-25T20:51:53.765503285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 20:51:53.766311 containerd[1514]: time="2024-06-25T20:51:53.765626394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 20:51:53.766311 containerd[1514]: time="2024-06-25T20:51:53.765650286Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 20:51:53.766311 containerd[1514]: time="2024-06-25T20:51:53.765734846Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 20:51:53.766311 containerd[1514]: time="2024-06-25T20:51:53.765769035Z" level=info msg="metadata content store policy set" policy=shared Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771731269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771779069Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771820621Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771884273Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771912238Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771948078Z" level=info msg="NRI interface is disabled by configuration." Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.771970254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772133275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772159829Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772187594Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772215017Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772237491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772270539Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.772811 containerd[1514]: time="2024-06-25T20:51:53.772299224Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772327409Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772350903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772372212Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772402064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772421511Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772584909Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772875650Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772912400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772953380Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.772992866Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.773095551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.773122221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.773142239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773289 containerd[1514]: time="2024-06-25T20:51:53.773160287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773183807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773203181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773221836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773239927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773259272Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773537031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773564262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773586054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773606984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773626175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773647201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773678321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 20:51:53.773773 containerd[1514]: time="2024-06-25T20:51:53.773695775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 20:51:53.777825 containerd[1514]: time="2024-06-25T20:51:53.776473955Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 20:51:53.777825 containerd[1514]: time="2024-06-25T20:51:53.776567174Z" level=info msg="Connect containerd service" Jun 25 20:51:53.777825 containerd[1514]: time="2024-06-25T20:51:53.776620386Z" level=info msg="using legacy CRI server" Jun 25 20:51:53.777825 containerd[1514]: time="2024-06-25T20:51:53.776636997Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 20:51:53.777825 containerd[1514]: time="2024-06-25T20:51:53.776762568Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 20:51:53.779775 containerd[1514]: time="2024-06-25T20:51:53.779702226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 20:51:53.780818 
containerd[1514]: time="2024-06-25T20:51:53.779786230Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 20:51:53.780818 containerd[1514]: time="2024-06-25T20:51:53.780512302Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 20:51:53.780818 containerd[1514]: time="2024-06-25T20:51:53.780537640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 20:51:53.780818 containerd[1514]: time="2024-06-25T20:51:53.780560677Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.779939733Z" level=info msg="Start subscribing containerd event" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781126375Z" level=info msg="Start recovering state" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781242673Z" level=info msg="Start event monitor" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781272373Z" level=info msg="Start snapshots syncer" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781295769Z" level=info msg="Start cni network conf syncer for default" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781310410Z" level=info msg="Start streaming server" Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781252735Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781634254Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 20:51:53.785268 containerd[1514]: time="2024-06-25T20:51:53.781746197Z" level=info msg="containerd successfully booted in 0.057663s" Jun 25 20:51:53.781887 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 20:51:53.789170 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 20:51:53.826262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 20:51:53.835297 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 20:51:53.847768 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 20:51:53.848055 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 20:51:53.856219 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 20:51:53.886988 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 20:51:53.896339 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 20:51:53.905773 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 20:51:53.907617 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 20:51:54.104883 tar[1505]: linux-amd64/LICENSE Jun 25 20:51:54.104883 tar[1505]: linux-amd64/README.md Jun 25 20:51:54.118514 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 20:51:54.279225 systemd-networkd[1442]: eth0: Gained IPv6LL Jun 25 20:51:54.281153 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jun 25 20:51:54.283644 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
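The containerd CRI plugin error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a first boot: the directory stays empty until a CNI plugin or the cluster's network add-on installs a config there. A sketch that checks for the standard CNI config file types in that directory (path taken from the log):

    #!/usr/bin/env python3
    """Check whether a CNI network config exists yet.

    Sketch only: /etc/cni/net.d is the NetworkPluginConfDir from the containerd
    config dump above; *.conf, *.conflist and *.json are the file types the CNI
    loader normally picks up.
    """
    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"

    configs = sorted(
        glob.glob(os.path.join(CNI_CONF_DIR, "*.conf"))
        + glob.glob(os.path.join(CNI_CONF_DIR, "*.conflist"))
        + glob.glob(os.path.join(CNI_CONF_DIR, "*.json"))
    )
    if configs:
        print("CNI network configs present:", ", ".join(configs))
    else:
        print(f"no CNI network config in {CNI_CONF_DIR}; "
              "containerd will keep reporting 'cni plugin not initialized'")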
Jun 25 20:51:54.286996 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 20:51:54.294166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:51:54.299045 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 20:51:54.333118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 20:51:55.172718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:51:55.183293 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 20:51:55.784825 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jun 25 20:51:55.786007 systemd-networkd[1442]: eth0: Ignoring DHCPv6 address 2a02:1348:179:835c:24:19ff:fee6:d72/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:835c:24:19ff:fee6:d72/64 assigned by NDisc. Jun 25 20:51:55.786019 systemd-networkd[1442]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jun 25 20:51:55.953670 kubelet[1608]: E0625 20:51:55.953511 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 20:51:55.955698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 20:51:55.955985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 20:51:55.956533 systemd[1]: kubelet.service: Consumed 1.098s CPU time. Jun 25 20:51:57.735793 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jun 25 20:51:58.205855 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 20:51:58.218378 systemd[1]: Started sshd@0-10.230.13.114:22-139.178.89.65:54424.service - OpenSSH per-connection server daemon (139.178.89.65:54424). Jun 25 20:51:58.967687 login[1585]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 20:51:58.970953 login[1586]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 20:51:58.988463 systemd-logind[1493]: New session 1 of user core. Jun 25 20:51:58.991498 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 20:51:59.000362 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 20:51:59.025721 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 20:51:59.033284 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 20:51:59.049422 (systemd)[1628]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:51:59.115101 sshd[1620]: Accepted publickey for core from 139.178.89.65 port 54424 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:51:59.117136 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:51:59.124681 systemd-logind[1493]: New session 3 of user core. Jun 25 20:51:59.191439 systemd[1628]: Queued start job for default target default.target. Jun 25 20:51:59.201563 systemd[1628]: Created slice app.slice - User Application Slice. 
Jun 25 20:51:59.201740 systemd[1628]: Reached target paths.target - Paths. Jun 25 20:51:59.201785 systemd[1628]: Reached target timers.target - Timers. Jun 25 20:51:59.203842 systemd[1628]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 20:51:59.219950 systemd[1628]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 20:51:59.220144 systemd[1628]: Reached target sockets.target - Sockets. Jun 25 20:51:59.220170 systemd[1628]: Reached target basic.target - Basic System. Jun 25 20:51:59.220253 systemd[1628]: Reached target default.target - Main User Target. Jun 25 20:51:59.220308 systemd[1628]: Startup finished in 161ms. Jun 25 20:51:59.220637 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 20:51:59.233116 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 20:51:59.234451 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 20:51:59.875236 systemd[1]: Started sshd@1-10.230.13.114:22-139.178.89.65:54436.service - OpenSSH per-connection server daemon (139.178.89.65:54436). Jun 25 20:51:59.968302 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 20:51:59.975880 systemd-logind[1493]: New session 2 of user core. Jun 25 20:51:59.987080 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 20:52:00.096958 coreos-metadata[1483]: Jun 25 20:52:00.096 WARN failed to locate config-drive, using the metadata service API instead Jun 25 20:52:00.128898 coreos-metadata[1483]: Jun 25 20:52:00.128 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 25 20:52:00.136615 coreos-metadata[1483]: Jun 25 20:52:00.136 INFO Fetch failed with 404: resource not found Jun 25 20:52:00.136701 coreos-metadata[1483]: Jun 25 20:52:00.136 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 20:52:00.137573 coreos-metadata[1483]: Jun 25 20:52:00.137 INFO Fetch successful Jun 25 20:52:00.137716 coreos-metadata[1483]: Jun 25 20:52:00.137 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 25 20:52:00.182949 coreos-metadata[1483]: Jun 25 20:52:00.182 INFO Fetch successful Jun 25 20:52:00.183070 coreos-metadata[1483]: Jun 25 20:52:00.183 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 25 20:52:00.234140 coreos-metadata[1483]: Jun 25 20:52:00.234 INFO Fetch successful Jun 25 20:52:00.234337 coreos-metadata[1483]: Jun 25 20:52:00.234 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 25 20:52:00.295726 coreos-metadata[1483]: Jun 25 20:52:00.295 INFO Fetch successful Jun 25 20:52:00.295874 coreos-metadata[1483]: Jun 25 20:52:00.295 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 25 20:52:00.350767 coreos-metadata[1483]: Jun 25 20:52:00.350 INFO Fetch successful Jun 25 20:52:00.383997 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 20:52:00.385241 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
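The coreos-metadata entries above show the agent falling back from the missing OpenStack config drive to the EC2-style metadata service and fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4. A sketch that queries the same endpoints (URLs copied from the log; the link-local 169.254.169.254 address is only reachable from inside the instance):

    #!/usr/bin/env python3
    """Fetch the same metadata endpoints the agent queried above.

    Sketch only: the keys and base URL are the ones coreos-metadata logs during
    this boot; outside the instance every request will simply fail.
    """
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data"
    KEYS = ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4")

    for key in KEYS:
        url = f"{BASE}/{key}"
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                print(f"{key}: {resp.read().decode().strip()}")
        except OSError as exc:
            print(f"{key}: fetch failed ({exc})")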
Jun 25 20:52:00.622359 coreos-metadata[1552]: Jun 25 20:52:00.622 WARN failed to locate config-drive, using the metadata service API instead Jun 25 20:52:00.644652 coreos-metadata[1552]: Jun 25 20:52:00.644 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 25 20:52:00.706969 coreos-metadata[1552]: Jun 25 20:52:00.706 INFO Fetch successful Jun 25 20:52:00.707139 coreos-metadata[1552]: Jun 25 20:52:00.707 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 20:52:00.738519 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 54436 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:00.740706 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:00.747492 systemd-logind[1493]: New session 4 of user core. Jun 25 20:52:00.753135 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 20:52:00.754827 coreos-metadata[1552]: Jun 25 20:52:00.754 INFO Fetch successful Jun 25 20:52:00.759646 unknown[1552]: wrote ssh authorized keys file for user: core Jun 25 20:52:00.778356 update-ssh-keys[1666]: Updated "/home/core/.ssh/authorized_keys" Jun 25 20:52:00.780322 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 20:52:00.782529 systemd[1]: Finished sshkeys.service. Jun 25 20:52:00.786220 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 20:52:00.786818 systemd[1]: Startup finished in 1.347s (kernel) + 15.353s (initrd) + 11.725s (userspace) = 28.426s. Jun 25 20:52:01.347660 sshd[1646]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:01.352622 systemd[1]: sshd@1-10.230.13.114:22-139.178.89.65:54436.service: Deactivated successfully. Jun 25 20:52:01.355053 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 20:52:01.356155 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Jun 25 20:52:01.357660 systemd-logind[1493]: Removed session 4. Jun 25 20:52:01.501225 systemd[1]: Started sshd@2-10.230.13.114:22-139.178.89.65:54452.service - OpenSSH per-connection server daemon (139.178.89.65:54452). Jun 25 20:52:02.375851 sshd[1673]: Accepted publickey for core from 139.178.89.65 port 54452 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:02.377862 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:02.384349 systemd-logind[1493]: New session 5 of user core. Jun 25 20:52:02.391086 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 20:52:02.976498 sshd[1673]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:02.981002 systemd[1]: sshd@2-10.230.13.114:22-139.178.89.65:54452.service: Deactivated successfully. Jun 25 20:52:02.983146 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 20:52:02.984082 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. Jun 25 20:52:02.985497 systemd-logind[1493]: Removed session 5. Jun 25 20:52:03.130149 systemd[1]: Started sshd@3-10.230.13.114:22-139.178.89.65:54460.service - OpenSSH per-connection server daemon (139.178.89.65:54460). Jun 25 20:52:03.992871 sshd[1680]: Accepted publickey for core from 139.178.89.65 port 54460 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:03.994681 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:04.001395 systemd-logind[1493]: New session 6 of user core. 
Jun 25 20:52:04.009023 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 20:52:04.596983 sshd[1680]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:04.600724 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Jun 25 20:52:04.601471 systemd[1]: sshd@3-10.230.13.114:22-139.178.89.65:54460.service: Deactivated successfully. Jun 25 20:52:04.603663 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 20:52:04.605845 systemd-logind[1493]: Removed session 6. Jun 25 20:52:04.748701 systemd[1]: Started sshd@4-10.230.13.114:22-139.178.89.65:54472.service - OpenSSH per-connection server daemon (139.178.89.65:54472). Jun 25 20:52:05.632490 sshd[1687]: Accepted publickey for core from 139.178.89.65 port 54472 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:05.634503 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:05.642921 systemd-logind[1493]: New session 7 of user core. Jun 25 20:52:05.648030 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 20:52:06.013779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 20:52:06.029099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:06.238033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:06.250754 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 20:52:06.251267 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 20:52:06.252198 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 20:52:06.265075 sudo[1693]: pam_unix(sudo:session): session closed for user root Jun 25 20:52:06.326987 kubelet[1699]: E0625 20:52:06.326891 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 20:52:06.332083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 20:52:06.332342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 20:52:06.407196 sshd[1687]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:06.411106 systemd[1]: sshd@4-10.230.13.114:22-139.178.89.65:54472.service: Deactivated successfully. Jun 25 20:52:06.413240 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 20:52:06.415348 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Jun 25 20:52:06.416993 systemd-logind[1493]: Removed session 7. Jun 25 20:52:06.572210 systemd[1]: Started sshd@5-10.230.13.114:22-139.178.89.65:59584.service - OpenSSH per-connection server daemon (139.178.89.65:59584). Jun 25 20:52:07.441533 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 59584 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:07.443895 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:07.450907 systemd-logind[1493]: New session 8 of user core. Jun 25 20:52:07.462018 systemd[1]: Started session-8.scope - Session 8 of User core. 
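The kubelet failures recurring in this log all have the same cause: /var/lib/kubelet/config.yaml does not exist yet, so every restart exits with status 1 and systemd schedules another attempt. A tiny sketch that makes the precondition explicit (path taken from the error message; on a kubeadm-provisioned node the file is normally written by kubeadm init or join):

    #!/usr/bin/env python3
    """Explain the recurring kubelet failure seen in this log.

    kubelet exits with status 1 because /var/lib/kubelet/config.yaml is missing;
    that file is normally created by kubeadm during init/join, so the crash-loop
    is expected until node provisioning has run. Sketch only.
    """
    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    if os.path.exists(KUBELET_CONFIG):
        print(f"{KUBELET_CONFIG} exists; kubelet should start on its next restart")
    else:
        print(f"{KUBELET_CONFIG} is missing; kubelet will keep restarting "
              "until kubeadm (or other provisioning) writes it")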
Jun 25 20:52:07.912264 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 20:52:07.912741 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 20:52:07.919582 sudo[1715]: pam_unix(sudo:session): session closed for user root Jun 25 20:52:07.928040 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 20:52:07.928475 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 20:52:07.951201 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 20:52:07.953749 auditctl[1718]: No rules Jun 25 20:52:07.954259 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 20:52:07.954643 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 20:52:07.963418 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 20:52:08.001859 augenrules[1736]: No rules Jun 25 20:52:08.002788 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 20:52:08.004650 sudo[1714]: pam_unix(sudo:session): session closed for user root Jun 25 20:52:08.146570 sshd[1711]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:08.150763 systemd[1]: sshd@5-10.230.13.114:22-139.178.89.65:59584.service: Deactivated successfully. Jun 25 20:52:08.153336 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 20:52:08.155440 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Jun 25 20:52:08.156938 systemd-logind[1493]: Removed session 8. Jun 25 20:52:08.301208 systemd[1]: Started sshd@6-10.230.13.114:22-139.178.89.65:59596.service - OpenSSH per-connection server daemon (139.178.89.65:59596). Jun 25 20:52:09.179262 sshd[1744]: Accepted publickey for core from 139.178.89.65 port 59596 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:52:09.181603 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:52:09.188204 systemd-logind[1493]: New session 9 of user core. Jun 25 20:52:09.196112 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 20:52:09.647810 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 20:52:09.648282 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 20:52:09.855423 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 20:52:09.856545 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 20:52:10.272604 dockerd[1757]: time="2024-06-25T20:52:10.272182899Z" level=info msg="Starting up" Jun 25 20:52:10.344692 dockerd[1757]: time="2024-06-25T20:52:10.344636747Z" level=info msg="Loading containers: start." Jun 25 20:52:10.494848 kernel: Initializing XFRM netlink socket Jun 25 20:52:10.541574 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jun 25 20:52:10.606707 systemd-networkd[1442]: docker0: Link UP Jun 25 20:52:10.620101 dockerd[1757]: time="2024-06-25T20:52:10.620040796Z" level=info msg="Loading containers: done." 
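dockerd has just finished loading containers and brought up docker0. A minimal check (assuming the docker CLI is on PATH, as it is on Flatcar) that the daemon now answers on its socket at /run/docker.sock:

    #!/usr/bin/env python3
    """Confirm the Docker daemon that just started in the entries above.

    Sketch only: it runs the standard `docker version` client command and prints
    whatever the daemon (or the error path) returns.
    """
    import subprocess

    result = subprocess.run(["docker", "version"], capture_output=True, text=True)
    print(result.stdout or result.stderr)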
Jun 25 20:52:10.713317 dockerd[1757]: time="2024-06-25T20:52:10.713260130Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 20:52:10.713627 dockerd[1757]: time="2024-06-25T20:52:10.713593886Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 20:52:10.713789 dockerd[1757]: time="2024-06-25T20:52:10.713757896Z" level=info msg="Daemon has completed initialization" Jun 25 20:52:10.748000 dockerd[1757]: time="2024-06-25T20:52:10.747917421Z" level=info msg="API listen on /run/docker.sock" Jun 25 20:52:10.749655 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 20:52:12.031416 systemd-resolved[1394]: Clock change detected. Flushing caches. Jun 25 20:52:12.032075 systemd-timesyncd[1410]: Contacted time server [2a02:6b66:675f::]:123 (2.flatcar.pool.ntp.org). Jun 25 20:52:12.032169 systemd-timesyncd[1410]: Initial clock synchronization to Tue 2024-06-25 20:52:12.031002 UTC. Jun 25 20:52:12.660261 containerd[1514]: time="2024-06-25T20:52:12.660101832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 20:52:13.510677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027232574.mount: Deactivated successfully. Jun 25 20:52:16.080038 containerd[1514]: time="2024-06-25T20:52:16.079932881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:16.081649 containerd[1514]: time="2024-06-25T20:52:16.081600093Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845" Jun 25 20:52:16.082331 containerd[1514]: time="2024-06-25T20:52:16.081992669Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:16.086299 containerd[1514]: time="2024-06-25T20:52:16.086224146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:16.088212 containerd[1514]: time="2024-06-25T20:52:16.087946773Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.427715414s" Jun 25 20:52:16.088212 containerd[1514]: time="2024-06-25T20:52:16.088009467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 20:52:16.125510 containerd[1514]: time="2024-06-25T20:52:16.125411612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 20:52:17.081703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 20:52:17.091320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:17.246521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 20:52:17.262170 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 20:52:17.579114 kubelet[1956]: E0625 20:52:17.578554 1956 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 20:52:17.582393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 20:52:17.582625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 20:52:18.842689 containerd[1514]: time="2024-06-25T20:52:18.842576693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:18.844619 containerd[1514]: time="2024-06-25T20:52:18.844562870Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755" Jun 25 20:52:18.845376 containerd[1514]: time="2024-06-25T20:52:18.845327719Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:18.850807 containerd[1514]: time="2024-06-25T20:52:18.850747046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:18.852112 containerd[1514]: time="2024-06-25T20:52:18.851775185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.726286471s" Jun 25 20:52:18.852112 containerd[1514]: time="2024-06-25T20:52:18.851822812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 20:52:18.882314 containerd[1514]: time="2024-06-25T20:52:18.881971455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 20:52:20.448679 containerd[1514]: time="2024-06-25T20:52:20.447175371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:20.450142 containerd[1514]: time="2024-06-25T20:52:20.450099983Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811" Jun 25 20:52:20.451345 containerd[1514]: time="2024-06-25T20:52:20.451302184Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:20.455125 containerd[1514]: time="2024-06-25T20:52:20.455085380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 20:52:20.457044 containerd[1514]: time="2024-06-25T20:52:20.456991604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.574963769s" Jun 25 20:52:20.457129 containerd[1514]: time="2024-06-25T20:52:20.457048496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 20:52:20.485196 containerd[1514]: time="2024-06-25T20:52:20.485121188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 20:52:22.429907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153144090.mount: Deactivated successfully. Jun 25 20:52:23.156997 containerd[1514]: time="2024-06-25T20:52:23.156849157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:23.158730 containerd[1514]: time="2024-06-25T20:52:23.158668520Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342" Jun 25 20:52:23.159875 containerd[1514]: time="2024-06-25T20:52:23.159813639Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:23.163604 containerd[1514]: time="2024-06-25T20:52:23.163550832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:23.165124 containerd[1514]: time="2024-06-25T20:52:23.164897657Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 2.679720533s" Jun 25 20:52:23.165124 containerd[1514]: time="2024-06-25T20:52:23.164947047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 20:52:23.195405 containerd[1514]: time="2024-06-25T20:52:23.195267050Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 20:52:23.819066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033349651.mount: Deactivated successfully. 
Jun 25 20:52:25.056561 containerd[1514]: time="2024-06-25T20:52:25.056447796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.058245 containerd[1514]: time="2024-06-25T20:52:25.058170688Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jun 25 20:52:25.059108 containerd[1514]: time="2024-06-25T20:52:25.059027101Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.071914 containerd[1514]: time="2024-06-25T20:52:25.071772655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.073998 containerd[1514]: time="2024-06-25T20:52:25.073596378Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.878266874s" Jun 25 20:52:25.073998 containerd[1514]: time="2024-06-25T20:52:25.073649106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 20:52:25.106341 containerd[1514]: time="2024-06-25T20:52:25.106161639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 20:52:25.746215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231709922.mount: Deactivated successfully. 
Jun 25 20:52:25.750925 containerd[1514]: time="2024-06-25T20:52:25.750848367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.752889 containerd[1514]: time="2024-06-25T20:52:25.752799350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 20:52:25.753959 containerd[1514]: time="2024-06-25T20:52:25.753856303Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.758024 containerd[1514]: time="2024-06-25T20:52:25.757958772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:25.759238 containerd[1514]: time="2024-06-25T20:52:25.758820204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 652.235981ms" Jun 25 20:52:25.759238 containerd[1514]: time="2024-06-25T20:52:25.758865110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 20:52:25.788846 containerd[1514]: time="2024-06-25T20:52:25.788717171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 20:52:26.397926 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 20:52:26.422570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910714786.mount: Deactivated successfully. Jun 25 20:52:27.830904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 20:52:27.841817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:28.361373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:28.366869 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 20:52:28.496609 kubelet[2107]: E0625 20:52:28.495910 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 20:52:28.499099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 20:52:28.499430 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
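Note: kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-managed nodes that file is written during kubeadm init / kubeadm join, so these restart failures are expected until the node is bootstrapped. A minimal, illustrative sketch of such a file, written from the shell; the field values are assumptions chosen to match the systemd cgroup driver and static pod path seen later in this log, not content recovered from this host:

  # Normally generated by kubeadm; shown only as a sketch of the expected format.
  printf '%s\n' \
    'apiVersion: kubelet.config.k8s.io/v1beta1' \
    'kind: KubeletConfiguration' \
    'cgroupDriver: systemd' \
    'staticPodPath: /etc/kubernetes/manifests' \
    | sudo tee /var/lib/kubelet/config.yaml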
Jun 25 20:52:29.408002 containerd[1514]: time="2024-06-25T20:52:29.407923222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:29.409455 containerd[1514]: time="2024-06-25T20:52:29.409381386Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 20:52:29.410371 containerd[1514]: time="2024-06-25T20:52:29.410278000Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:29.415044 containerd[1514]: time="2024-06-25T20:52:29.414938750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:52:29.417053 containerd[1514]: time="2024-06-25T20:52:29.416873104Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.628100464s" Jun 25 20:52:29.417053 containerd[1514]: time="2024-06-25T20:52:29.416920903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 20:52:34.390208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:34.396556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:34.422973 systemd[1]: Reloading requested from client PID 2183 ('systemctl') (unit session-9.scope)... Jun 25 20:52:34.423243 systemd[1]: Reloading... Jun 25 20:52:34.593231 zram_generator::config[2217]: No configuration found. Jun 25 20:52:34.759559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 20:52:34.871000 systemd[1]: Reloading finished in 447 ms. Jun 25 20:52:34.938942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 20:52:34.939095 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 20:52:34.939773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:34.945570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:35.084549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:35.101727 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 20:52:35.201807 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 20:52:35.202535 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
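Note: the restarted kubelet warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that should move into the config file; on kubeadm-managed nodes they are usually injected through KUBELET_KUBEADM_ARGS from a flags file. A hedged way to see where they come from, assuming the kubeadm default paths:

  # Paths are kubeadm defaults; they may differ on other layouts.
  systemctl cat kubelet.service
  cat /var/lib/kubelet/kubeadm-flags.env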
Jun 25 20:52:35.202535 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 20:52:35.204033 kubelet[2288]: I0625 20:52:35.203739 2288 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 20:52:35.793241 kubelet[2288]: I0625 20:52:35.792368 2288 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 20:52:35.793241 kubelet[2288]: I0625 20:52:35.792411 2288 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 20:52:35.793241 kubelet[2288]: I0625 20:52:35.792697 2288 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 20:52:35.823996 kubelet[2288]: E0625 20:52:35.823897 2288 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.13.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.824410 kubelet[2288]: I0625 20:52:35.824245 2288 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 20:52:35.846553 kubelet[2288]: I0625 20:52:35.846459 2288 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 20:52:35.847756 kubelet[2288]: I0625 20:52:35.847684 2288 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 20:52:35.849000 kubelet[2288]: I0625 20:52:35.848866 2288 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 20:52:35.849552 kubelet[2288]: I0625 20:52:35.849513 2288 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 20:52:35.849552 kubelet[2288]: I0625 20:52:35.849545 2288 container_manager_linux.go:301] "Creating device plugin manager" Jun 
25 20:52:35.849776 kubelet[2288]: I0625 20:52:35.849743 2288 state_mem.go:36] "Initialized new in-memory state store" Jun 25 20:52:35.850007 kubelet[2288]: I0625 20:52:35.849963 2288 kubelet.go:396] "Attempting to sync node with API server" Jun 25 20:52:35.850059 kubelet[2288]: I0625 20:52:35.850020 2288 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 20:52:35.850873 kubelet[2288]: I0625 20:52:35.850804 2288 kubelet.go:312] "Adding apiserver pod source" Jun 25 20:52:35.850873 kubelet[2288]: I0625 20:52:35.850856 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 20:52:35.852218 kubelet[2288]: W0625 20:52:35.852101 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.13.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-azn0z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.852322 kubelet[2288]: E0625 20:52:35.852246 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.13.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-azn0z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.853677 kubelet[2288]: W0625 20:52:35.853472 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.13.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.853677 kubelet[2288]: E0625 20:52:35.853531 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.13.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.854133 kubelet[2288]: I0625 20:52:35.854111 2288 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 20:52:35.858960 kubelet[2288]: I0625 20:52:35.858906 2288 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 20:52:35.860444 kubelet[2288]: W0625 20:52:35.860408 2288 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
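Note: every list/watch against https://10.230.13.114:6443 fails with connection refused because the API server this kubelet is trying to reach is itself one of the static pods it has not started yet; the errors clear once the kube-apiserver container further down comes up. A hedged liveness check against the same endpoint (anonymous access to /healthz is the kubeadm default; -k only skips TLS verification for this probe):

  # Endpoint taken from the log above; expect "ok" once the static kube-apiserver is running.
  curl -sk https://10.230.13.114:6443/healthz; echo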
Jun 25 20:52:35.861480 kubelet[2288]: I0625 20:52:35.861451 2288 server.go:1256] "Started kubelet" Jun 25 20:52:35.863993 kubelet[2288]: I0625 20:52:35.863107 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 20:52:35.870418 kubelet[2288]: E0625 20:52:35.870335 2288 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.13.114:6443/api/v1/namespaces/default/events\": dial tcp 10.230.13.114:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-azn0z.gb1.brightbox.com.17dc5a957e211053 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-azn0z.gb1.brightbox.com,UID:srv-azn0z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-azn0z.gb1.brightbox.com,},FirstTimestamp:2024-06-25 20:52:35.861409875 +0000 UTC m=+0.754775037,LastTimestamp:2024-06-25 20:52:35.861409875 +0000 UTC m=+0.754775037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-azn0z.gb1.brightbox.com,}" Jun 25 20:52:35.874439 kubelet[2288]: I0625 20:52:35.873010 2288 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 20:52:35.874439 kubelet[2288]: I0625 20:52:35.874208 2288 server.go:461] "Adding debug handlers to kubelet server" Jun 25 20:52:35.875074 kubelet[2288]: I0625 20:52:35.875030 2288 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 20:52:35.875632 kubelet[2288]: I0625 20:52:35.875602 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 20:52:35.875892 kubelet[2288]: I0625 20:52:35.875864 2288 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 20:52:35.876065 kubelet[2288]: I0625 20:52:35.876041 2288 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 20:52:35.876701 kubelet[2288]: I0625 20:52:35.876272 2288 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 20:52:35.879015 kubelet[2288]: W0625 20:52:35.878966 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.13.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.879117 kubelet[2288]: E0625 20:52:35.879029 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.13.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.879225 kubelet[2288]: E0625 20:52:35.879130 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.13.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-azn0z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.13.114:6443: connect: connection refused" interval="200ms" Jun 25 20:52:35.881900 kubelet[2288]: I0625 20:52:35.881871 2288 factory.go:221] Registration of the containerd container factory successfully Jun 25 20:52:35.881900 kubelet[2288]: I0625 20:52:35.881896 2288 factory.go:221] Registration of the systemd container factory successfully Jun 25 20:52:35.882041 kubelet[2288]: I0625 20:52:35.881977 2288 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 20:52:35.893159 kubelet[2288]: E0625 20:52:35.893105 2288 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 20:52:35.913049 kubelet[2288]: I0625 20:52:35.913013 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 20:52:35.915211 kubelet[2288]: I0625 20:52:35.914969 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 20:52:35.915211 kubelet[2288]: I0625 20:52:35.915030 2288 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 20:52:35.915211 kubelet[2288]: I0625 20:52:35.915065 2288 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 20:52:35.915211 kubelet[2288]: E0625 20:52:35.915142 2288 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 20:52:35.920990 kubelet[2288]: W0625 20:52:35.920935 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.13.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.921159 kubelet[2288]: E0625 20:52:35.921109 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.13.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:35.925765 kubelet[2288]: I0625 20:52:35.925728 2288 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 20:52:35.925765 kubelet[2288]: I0625 20:52:35.925763 2288 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 20:52:35.925901 kubelet[2288]: I0625 20:52:35.925805 2288 state_mem.go:36] "Initialized new in-memory state store" Jun 25 20:52:35.927674 kubelet[2288]: I0625 20:52:35.927640 2288 policy_none.go:49] "None policy: Start" Jun 25 20:52:35.928963 kubelet[2288]: I0625 20:52:35.928578 2288 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 20:52:35.928963 kubelet[2288]: I0625 20:52:35.928620 2288 state_mem.go:35] "Initializing new in-memory state store" Jun 25 20:52:35.937544 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 20:52:35.948276 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 20:52:35.961150 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
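Note: kubelet is running with the systemd cgroup driver (CgroupDriver "systemd" in the node config above) and has just created the kubepods QoS slices. A hedged way to inspect them from the host; the slice names come from the log entries above:

  systemctl status kubepods.slice kubepods-besteffort.slice kubepods-burstable.slice
  systemd-cgls --unit kubepods.slice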
Jun 25 20:52:35.963031 kubelet[2288]: I0625 20:52:35.962986 2288 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 20:52:35.964444 kubelet[2288]: I0625 20:52:35.964000 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 20:52:35.965705 kubelet[2288]: E0625 20:52:35.965640 2288 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-azn0z.gb1.brightbox.com\" not found" Jun 25 20:52:35.978974 kubelet[2288]: I0625 20:52:35.978860 2288 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:35.979584 kubelet[2288]: E0625 20:52:35.979316 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.13.114:6443/api/v1/nodes\": dial tcp 10.230.13.114:6443: connect: connection refused" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.016433 kubelet[2288]: I0625 20:52:36.016245 2288 topology_manager.go:215] "Topology Admit Handler" podUID="b0e705acb1defaf20326c55be72c086b" podNamespace="kube-system" podName="kube-scheduler-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.018878 kubelet[2288]: I0625 20:52:36.018851 2288 topology_manager.go:215] "Topology Admit Handler" podUID="544bce9f210d2583c3d8596a50450d58" podNamespace="kube-system" podName="kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.021287 kubelet[2288]: I0625 20:52:36.021241 2288 topology_manager.go:215] "Topology Admit Handler" podUID="5185de371d1faed22b835521cadce8ac" podNamespace="kube-system" podName="kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.031082 systemd[1]: Created slice kubepods-burstable-podb0e705acb1defaf20326c55be72c086b.slice - libcontainer container kubepods-burstable-podb0e705acb1defaf20326c55be72c086b.slice. Jun 25 20:52:36.048860 systemd[1]: Created slice kubepods-burstable-pod544bce9f210d2583c3d8596a50450d58.slice - libcontainer container kubepods-burstable-pod544bce9f210d2583c3d8596a50450d58.slice. Jun 25 20:52:36.056469 systemd[1]: Created slice kubepods-burstable-pod5185de371d1faed22b835521cadce8ac.slice - libcontainer container kubepods-burstable-pod5185de371d1faed22b835521cadce8ac.slice. 
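Note: the three "Topology Admit Handler" entries are the static pods kubelet found under the /etc/kubernetes/manifests path registered earlier (kube-scheduler, kube-apiserver and kube-controller-manager; no etcd manifest is admitted here). A hedged way to list the manifests, assuming the usual kubeadm file names:

  # Expected per the admissions above: kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
  ls -l /etc/kubernetes/manifests/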
Jun 25 20:52:36.079570 kubelet[2288]: E0625 20:52:36.079543 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.13.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-azn0z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.13.114:6443: connect: connection refused" interval="400ms" Jun 25 20:52:36.178021 kubelet[2288]: I0625 20:52:36.177967 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0e705acb1defaf20326c55be72c086b-kubeconfig\") pod \"kube-scheduler-srv-azn0z.gb1.brightbox.com\" (UID: \"b0e705acb1defaf20326c55be72c086b\") " pod="kube-system/kube-scheduler-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178151 kubelet[2288]: I0625 20:52:36.178036 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-k8s-certs\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178151 kubelet[2288]: I0625 20:52:36.178075 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-usr-share-ca-certificates\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178151 kubelet[2288]: I0625 20:52:36.178134 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-ca-certs\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178334 kubelet[2288]: I0625 20:52:36.178239 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-flexvolume-dir\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178334 kubelet[2288]: I0625 20:52:36.178275 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-k8s-certs\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178334 kubelet[2288]: I0625 20:52:36.178305 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-kubeconfig\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178500 kubelet[2288]: I0625 20:52:36.178337 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-ca-certs\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.178500 kubelet[2288]: I0625 20:52:36.178373 2288 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.182497 kubelet[2288]: I0625 20:52:36.182410 2288 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.182980 kubelet[2288]: E0625 20:52:36.182950 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.13.114:6443/api/v1/nodes\": dial tcp 10.230.13.114:6443: connect: connection refused" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.345991 containerd[1514]: time="2024-06-25T20:52:36.345873940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-azn0z.gb1.brightbox.com,Uid:b0e705acb1defaf20326c55be72c086b,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:36.372443 containerd[1514]: time="2024-06-25T20:52:36.372042585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-azn0z.gb1.brightbox.com,Uid:544bce9f210d2583c3d8596a50450d58,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:36.372443 containerd[1514]: time="2024-06-25T20:52:36.372057341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-azn0z.gb1.brightbox.com,Uid:5185de371d1faed22b835521cadce8ac,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:36.480345 kubelet[2288]: E0625 20:52:36.480292 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.13.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-azn0z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.13.114:6443: connect: connection refused" interval="800ms" Jun 25 20:52:36.585279 kubelet[2288]: I0625 20:52:36.585241 2288 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.586046 kubelet[2288]: E0625 20:52:36.586013 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.13.114:6443/api/v1/nodes\": dial tcp 10.230.13.114:6443: connect: connection refused" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:36.965790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133232084.mount: Deactivated successfully. 
Jun 25 20:52:36.968917 containerd[1514]: time="2024-06-25T20:52:36.968799941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 20:52:36.971325 containerd[1514]: time="2024-06-25T20:52:36.971248782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 20:52:36.971766 containerd[1514]: time="2024-06-25T20:52:36.971718291Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 20:52:36.972847 containerd[1514]: time="2024-06-25T20:52:36.972809641Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 20:52:36.974158 containerd[1514]: time="2024-06-25T20:52:36.974020495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 20:52:36.975124 containerd[1514]: time="2024-06-25T20:52:36.974978099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 20:52:36.975124 containerd[1514]: time="2024-06-25T20:52:36.975066420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 20:52:36.978351 containerd[1514]: time="2024-06-25T20:52:36.978312069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 20:52:36.982528 containerd[1514]: time="2024-06-25T20:52:36.981914771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.863672ms" Jun 25 20:52:36.986429 containerd[1514]: time="2024-06-25T20:52:36.986379936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.128055ms" Jun 25 20:52:36.986625 containerd[1514]: time="2024-06-25T20:52:36.986578061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.185339ms" Jun 25 20:52:37.204326 kubelet[2288]: W0625 20:52:37.200334 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.13.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.204326 kubelet[2288]: E0625 
20:52:37.204291 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.13.114:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.210873 containerd[1514]: time="2024-06-25T20:52:37.209794198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:37.210873 containerd[1514]: time="2024-06-25T20:52:37.210606438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.210873 containerd[1514]: time="2024-06-25T20:52:37.210645797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:37.210873 containerd[1514]: time="2024-06-25T20:52:37.210661669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.226319 containerd[1514]: time="2024-06-25T20:52:37.225823019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:37.226319 containerd[1514]: time="2024-06-25T20:52:37.225899241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.226319 containerd[1514]: time="2024-06-25T20:52:37.225920726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:37.226319 containerd[1514]: time="2024-06-25T20:52:37.225935543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.228208 containerd[1514]: time="2024-06-25T20:52:37.227619339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:37.228208 containerd[1514]: time="2024-06-25T20:52:37.227695684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.228362 containerd[1514]: time="2024-06-25T20:52:37.227719580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:37.228362 containerd[1514]: time="2024-06-25T20:52:37.227743316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:37.254315 kubelet[2288]: W0625 20:52:37.250914 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.13.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-azn0z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.254315 kubelet[2288]: E0625 20:52:37.252883 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.13.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-azn0z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.262292 systemd[1]: Started cri-containerd-c1fcca305b272a4f46015c466181bf52ff39024a4d82f2b39ea30c79e3e26451.scope - libcontainer container c1fcca305b272a4f46015c466181bf52ff39024a4d82f2b39ea30c79e3e26451. Jun 25 20:52:37.278384 systemd[1]: Started cri-containerd-38b1ab38fbc36b5c391e9e88c8192273e81498bef57db026cb4443ef6e384a29.scope - libcontainer container 38b1ab38fbc36b5c391e9e88c8192273e81498bef57db026cb4443ef6e384a29. Jun 25 20:52:37.280309 systemd[1]: Started cri-containerd-8588a7a36c3a4c40e54e68ead02d34c9ff40c8f368681fde37f55b649d6e87cc.scope - libcontainer container 8588a7a36c3a4c40e54e68ead02d34c9ff40c8f368681fde37f55b649d6e87cc. Jun 25 20:52:37.284049 kubelet[2288]: E0625 20:52:37.283610 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.13.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-azn0z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.13.114:6443: connect: connection refused" interval="1.6s" Jun 25 20:52:37.379127 containerd[1514]: time="2024-06-25T20:52:37.378968344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-azn0z.gb1.brightbox.com,Uid:544bce9f210d2583c3d8596a50450d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"8588a7a36c3a4c40e54e68ead02d34c9ff40c8f368681fde37f55b649d6e87cc\"" Jun 25 20:52:37.396215 kubelet[2288]: I0625 20:52:37.395793 2288 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:37.397207 kubelet[2288]: E0625 20:52:37.396610 2288 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.13.114:6443/api/v1/nodes\": dial tcp 10.230.13.114:6443: connect: connection refused" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:37.397304 containerd[1514]: time="2024-06-25T20:52:37.396891259Z" level=info msg="CreateContainer within sandbox \"8588a7a36c3a4c40e54e68ead02d34c9ff40c8f368681fde37f55b649d6e87cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 20:52:37.406432 containerd[1514]: time="2024-06-25T20:52:37.406360793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-azn0z.gb1.brightbox.com,Uid:5185de371d1faed22b835521cadce8ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1fcca305b272a4f46015c466181bf52ff39024a4d82f2b39ea30c79e3e26451\"" Jun 25 20:52:37.410423 containerd[1514]: time="2024-06-25T20:52:37.410352838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-azn0z.gb1.brightbox.com,Uid:b0e705acb1defaf20326c55be72c086b,Namespace:kube-system,Attempt:0,} returns sandbox id \"38b1ab38fbc36b5c391e9e88c8192273e81498bef57db026cb4443ef6e384a29\"" Jun 25 20:52:37.413547 
containerd[1514]: time="2024-06-25T20:52:37.413499538Z" level=info msg="CreateContainer within sandbox \"c1fcca305b272a4f46015c466181bf52ff39024a4d82f2b39ea30c79e3e26451\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 20:52:37.416180 containerd[1514]: time="2024-06-25T20:52:37.416146264Z" level=info msg="CreateContainer within sandbox \"38b1ab38fbc36b5c391e9e88c8192273e81498bef57db026cb4443ef6e384a29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 20:52:37.421041 kubelet[2288]: W0625 20:52:37.421007 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.13.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.421253 kubelet[2288]: E0625 20:52:37.421219 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.13.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.426752 containerd[1514]: time="2024-06-25T20:52:37.426708381Z" level=info msg="CreateContainer within sandbox \"8588a7a36c3a4c40e54e68ead02d34c9ff40c8f368681fde37f55b649d6e87cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30dc2527bfa11751f772b239466d9f5be9473078f7837c150b23de16e861bbe6\"" Jun 25 20:52:37.427638 containerd[1514]: time="2024-06-25T20:52:37.427503791Z" level=info msg="StartContainer for \"30dc2527bfa11751f772b239466d9f5be9473078f7837c150b23de16e861bbe6\"" Jun 25 20:52:37.435105 containerd[1514]: time="2024-06-25T20:52:37.435057538Z" level=info msg="CreateContainer within sandbox \"c1fcca305b272a4f46015c466181bf52ff39024a4d82f2b39ea30c79e3e26451\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"680d2ac88c980d41e0073b7cc65f10ccb10ba29d75d1f47242f55ea08d43a3bd\"" Jun 25 20:52:37.437205 containerd[1514]: time="2024-06-25T20:52:37.436447448Z" level=info msg="StartContainer for \"680d2ac88c980d41e0073b7cc65f10ccb10ba29d75d1f47242f55ea08d43a3bd\"" Jun 25 20:52:37.439453 containerd[1514]: time="2024-06-25T20:52:37.439418151Z" level=info msg="CreateContainer within sandbox \"38b1ab38fbc36b5c391e9e88c8192273e81498bef57db026cb4443ef6e384a29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45462ea3b31827715bd572476c70641b7d6d74702988947c501ce1acff307cb3\"" Jun 25 20:52:37.440064 containerd[1514]: time="2024-06-25T20:52:37.440035167Z" level=info msg="StartContainer for \"45462ea3b31827715bd572476c70641b7d6d74702988947c501ce1acff307cb3\"" Jun 25 20:52:37.465194 kubelet[2288]: W0625 20:52:37.465082 2288 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.13.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.465194 kubelet[2288]: E0625 20:52:37.465175 2288 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.13.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:37.478744 systemd[1]: Started 
cri-containerd-30dc2527bfa11751f772b239466d9f5be9473078f7837c150b23de16e861bbe6.scope - libcontainer container 30dc2527bfa11751f772b239466d9f5be9473078f7837c150b23de16e861bbe6. Jun 25 20:52:37.508393 systemd[1]: Started cri-containerd-680d2ac88c980d41e0073b7cc65f10ccb10ba29d75d1f47242f55ea08d43a3bd.scope - libcontainer container 680d2ac88c980d41e0073b7cc65f10ccb10ba29d75d1f47242f55ea08d43a3bd. Jun 25 20:52:37.524477 systemd[1]: Started cri-containerd-45462ea3b31827715bd572476c70641b7d6d74702988947c501ce1acff307cb3.scope - libcontainer container 45462ea3b31827715bd572476c70641b7d6d74702988947c501ce1acff307cb3. Jun 25 20:52:37.602698 containerd[1514]: time="2024-06-25T20:52:37.601844970Z" level=info msg="StartContainer for \"30dc2527bfa11751f772b239466d9f5be9473078f7837c150b23de16e861bbe6\" returns successfully" Jun 25 20:52:37.617458 containerd[1514]: time="2024-06-25T20:52:37.617405491Z" level=info msg="StartContainer for \"680d2ac88c980d41e0073b7cc65f10ccb10ba29d75d1f47242f55ea08d43a3bd\" returns successfully" Jun 25 20:52:37.650579 containerd[1514]: time="2024-06-25T20:52:37.650526285Z" level=info msg="StartContainer for \"45462ea3b31827715bd572476c70641b7d6d74702988947c501ce1acff307cb3\" returns successfully" Jun 25 20:52:37.844990 kubelet[2288]: E0625 20:52:37.844929 2288 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.13.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.13.114:6443: connect: connection refused Jun 25 20:52:38.931129 update_engine[1497]: I0625 20:52:38.928354 1497 update_attempter.cc:509] Updating boot flags... Jun 25 20:52:39.005753 kubelet[2288]: I0625 20:52:39.005693 2288 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:39.040155 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2574) Jun 25 20:52:39.246868 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2573) Jun 25 20:52:39.389212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2573) Jun 25 20:52:40.361761 kubelet[2288]: E0625 20:52:40.361694 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-azn0z.gb1.brightbox.com\" not found" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:40.384488 kubelet[2288]: I0625 20:52:40.384258 2288 kubelet_node_status.go:76] "Successfully registered node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:40.857160 kubelet[2288]: I0625 20:52:40.857046 2288 apiserver.go:52] "Watching apiserver" Jun 25 20:52:40.876438 kubelet[2288]: I0625 20:52:40.876360 2288 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 20:52:43.013130 kubelet[2288]: W0625 20:52:43.012401 2288 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 20:52:43.334449 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-9.scope)... Jun 25 20:52:43.334477 systemd[1]: Reloading... Jun 25 20:52:43.480389 zram_generator::config[2621]: No configuration found. 
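Note: once the static kube-apiserver container is up, the node srv-azn0z.gb1.brightbox.com registers successfully and kubelet begins watching the API server. A hedged verification from the host, assuming the kubeadm-generated admin kubeconfig at its default path:

  # Path is the kubeadm default; adjust for other setups.
  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
  kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system -o wide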
Jun 25 20:52:43.671634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 20:52:43.808265 systemd[1]: Reloading finished in 473 ms. Jun 25 20:52:43.882464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:43.902039 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 20:52:43.902523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:43.902655 systemd[1]: kubelet.service: Consumed 1.291s CPU time, 108.5M memory peak, 0B memory swap peak. Jun 25 20:52:43.912282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 20:52:44.126172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 20:52:44.141884 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 20:52:44.313037 sudo[2696]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 20:52:44.313601 sudo[2696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 20:52:44.332232 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 20:52:44.332232 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 20:52:44.332232 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 20:52:44.334293 kubelet[2684]: I0625 20:52:44.333992 2684 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 20:52:44.347213 kubelet[2684]: I0625 20:52:44.347008 2684 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 20:52:44.347213 kubelet[2684]: I0625 20:52:44.347049 2684 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 20:52:44.347450 kubelet[2684]: I0625 20:52:44.347366 2684 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 20:52:44.350981 kubelet[2684]: I0625 20:52:44.350685 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 20:52:44.362657 kubelet[2684]: I0625 20:52:44.362611 2684 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 20:52:44.387585 kubelet[2684]: I0625 20:52:44.387097 2684 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 20:52:44.387585 kubelet[2684]: I0625 20:52:44.387578 2684 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 20:52:44.388361 kubelet[2684]: I0625 20:52:44.387834 2684 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 20:52:44.388361 kubelet[2684]: I0625 20:52:44.387904 2684 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 20:52:44.388361 kubelet[2684]: I0625 20:52:44.387923 2684 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 20:52:44.388361 kubelet[2684]: I0625 20:52:44.388023 2684 state_mem.go:36] "Initialized new in-memory state store" Jun 25 20:52:44.388361 kubelet[2684]: I0625 20:52:44.388247 2684 kubelet.go:396] "Attempting to sync node with API server" Jun 25 20:52:44.389645 kubelet[2684]: I0625 20:52:44.389100 2684 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 20:52:44.389645 kubelet[2684]: I0625 20:52:44.389216 2684 kubelet.go:312] "Adding apiserver pod source" Jun 25 20:52:44.389645 kubelet[2684]: I0625 20:52:44.389259 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 20:52:44.391204 kubelet[2684]: I0625 20:52:44.390642 2684 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 20:52:44.391204 kubelet[2684]: I0625 20:52:44.390933 2684 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 20:52:44.393560 kubelet[2684]: I0625 20:52:44.393295 2684 server.go:1256] "Started kubelet" Jun 25 20:52:44.403041 kubelet[2684]: I0625 20:52:44.399877 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 20:52:44.425956 kubelet[2684]: I0625 20:52:44.423147 2684 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 20:52:44.441469 kubelet[2684]: I0625 20:52:44.437126 2684 server.go:461] "Adding debug handlers to kubelet server" Jun 25 20:52:44.447866 kubelet[2684]: I0625 20:52:44.423433 2684 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jun 25 20:52:44.447866 kubelet[2684]: I0625 20:52:44.433924 2684 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 20:52:44.447866 kubelet[2684]: I0625 20:52:44.433962 2684 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 20:52:44.456378 kubelet[2684]: I0625 20:52:44.456063 2684 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 20:52:44.457772 kubelet[2684]: I0625 20:52:44.456539 2684 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 20:52:44.462552 kubelet[2684]: I0625 20:52:44.462520 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 20:52:44.464106 kubelet[2684]: I0625 20:52:44.464083 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 20:52:44.464309 kubelet[2684]: I0625 20:52:44.464288 2684 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 20:52:44.464454 kubelet[2684]: I0625 20:52:44.464433 2684 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 20:52:44.464698 kubelet[2684]: E0625 20:52:44.464676 2684 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 20:52:44.471511 kubelet[2684]: I0625 20:52:44.471487 2684 factory.go:221] Registration of the systemd container factory successfully Jun 25 20:52:44.471875 kubelet[2684]: I0625 20:52:44.471755 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 20:52:44.478616 kubelet[2684]: I0625 20:52:44.478514 2684 factory.go:221] Registration of the containerd container factory successfully Jun 25 20:52:44.480198 kubelet[2684]: E0625 20:52:44.480096 2684 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 20:52:44.547325 kubelet[2684]: I0625 20:52:44.546877 2684 kubelet_node_status.go:73] "Attempting to register node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.558475 kubelet[2684]: I0625 20:52:44.558451 2684 kubelet_node_status.go:112] "Node was previously registered" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.558737 kubelet[2684]: I0625 20:52:44.558714 2684 kubelet_node_status.go:76] "Successfully registered node" node="srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.565828 kubelet[2684]: E0625 20:52:44.565362 2684 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 20:52:44.598745 kubelet[2684]: I0625 20:52:44.598712 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 20:52:44.598987 kubelet[2684]: I0625 20:52:44.598965 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 20:52:44.599209 kubelet[2684]: I0625 20:52:44.599106 2684 state_mem.go:36] "Initialized new in-memory state store" Jun 25 20:52:44.599691 kubelet[2684]: I0625 20:52:44.599496 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 20:52:44.599691 kubelet[2684]: I0625 20:52:44.599548 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 20:52:44.599691 kubelet[2684]: I0625 20:52:44.599571 2684 policy_none.go:49] "None policy: Start" Jun 25 20:52:44.601243 kubelet[2684]: I0625 20:52:44.600894 2684 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 20:52:44.601243 kubelet[2684]: I0625 20:52:44.600946 2684 state_mem.go:35] "Initializing new in-memory state store" Jun 25 20:52:44.601243 kubelet[2684]: I0625 20:52:44.601123 2684 state_mem.go:75] "Updated machine memory state" Jun 25 20:52:44.612307 kubelet[2684]: I0625 20:52:44.611657 2684 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 20:52:44.615098 kubelet[2684]: I0625 20:52:44.614998 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 20:52:44.766609 kubelet[2684]: I0625 20:52:44.766319 2684 topology_manager.go:215] "Topology Admit Handler" podUID="5185de371d1faed22b835521cadce8ac" podNamespace="kube-system" podName="kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.768329 kubelet[2684]: I0625 20:52:44.767672 2684 topology_manager.go:215] "Topology Admit Handler" podUID="b0e705acb1defaf20326c55be72c086b" podNamespace="kube-system" podName="kube-scheduler-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.768329 kubelet[2684]: I0625 20:52:44.767764 2684 topology_manager.go:215] "Topology Admit Handler" podUID="544bce9f210d2583c3d8596a50450d58" podNamespace="kube-system" podName="kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.781101 kubelet[2684]: W0625 20:52:44.780318 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 20:52:44.782658 kubelet[2684]: W0625 20:52:44.782627 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 20:52:44.785564 kubelet[2684]: W0625 20:52:44.785516 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 20:52:44.785658 
kubelet[2684]: E0625 20:52:44.785608 2684 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866066 kubelet[2684]: I0625 20:52:44.865957 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-k8s-certs\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866066 kubelet[2684]: I0625 20:52:44.866028 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-kubeconfig\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866066 kubelet[2684]: I0625 20:52:44.866069 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866376 kubelet[2684]: I0625 20:52:44.866107 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0e705acb1defaf20326c55be72c086b-kubeconfig\") pod \"kube-scheduler-srv-azn0z.gb1.brightbox.com\" (UID: \"b0e705acb1defaf20326c55be72c086b\") " pod="kube-system/kube-scheduler-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866376 kubelet[2684]: I0625 20:52:44.866145 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-ca-certs\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866376 kubelet[2684]: I0625 20:52:44.866203 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5185de371d1faed22b835521cadce8ac-flexvolume-dir\") pod \"kube-controller-manager-srv-azn0z.gb1.brightbox.com\" (UID: \"5185de371d1faed22b835521cadce8ac\") " pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866376 kubelet[2684]: I0625 20:52:44.866273 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-ca-certs\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866376 kubelet[2684]: I0625 20:52:44.866314 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-k8s-certs\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:44.866639 kubelet[2684]: I0625 20:52:44.866352 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/544bce9f210d2583c3d8596a50450d58-usr-share-ca-certificates\") pod \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" (UID: \"544bce9f210d2583c3d8596a50450d58\") " pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:45.155066 sudo[2696]: pam_unix(sudo:session): session closed for user root Jun 25 20:52:45.402225 kubelet[2684]: I0625 20:52:45.402144 2684 apiserver.go:52] "Watching apiserver" Jun 25 20:52:45.448154 kubelet[2684]: I0625 20:52:45.447837 2684 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 20:52:45.534084 kubelet[2684]: W0625 20:52:45.532750 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 20:52:45.534084 kubelet[2684]: E0625 20:52:45.532848 2684 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-azn0z.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" Jun 25 20:52:45.557559 kubelet[2684]: I0625 20:52:45.557513 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-azn0z.gb1.brightbox.com" podStartSLOduration=2.5574254229999998 podStartE2EDuration="2.557425423s" podCreationTimestamp="2024-06-25 20:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:52:45.555784415 +0000 UTC m=+1.370037093" watchObservedRunningTime="2024-06-25 20:52:45.557425423 +0000 UTC m=+1.371678124" Jun 25 20:52:45.585119 kubelet[2684]: I0625 20:52:45.584541 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-azn0z.gb1.brightbox.com" podStartSLOduration=1.584482317 podStartE2EDuration="1.584482317s" podCreationTimestamp="2024-06-25 20:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:52:45.570025849 +0000 UTC m=+1.384278531" watchObservedRunningTime="2024-06-25 20:52:45.584482317 +0000 UTC m=+1.398734994" Jun 25 20:52:45.597309 kubelet[2684]: I0625 20:52:45.597242 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-azn0z.gb1.brightbox.com" podStartSLOduration=1.597164123 podStartE2EDuration="1.597164123s" podCreationTimestamp="2024-06-25 20:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:52:45.586029859 +0000 UTC m=+1.400282557" watchObservedRunningTime="2024-06-25 20:52:45.597164123 +0000 UTC m=+1.411416818" Jun 25 20:52:46.646641 sudo[1747]: pam_unix(sudo:session): session closed for user root Jun 25 20:52:46.793643 sshd[1744]: pam_unix(sshd:session): session closed for user core Jun 25 20:52:46.799376 systemd[1]: sshd@6-10.230.13.114:22-139.178.89.65:59596.service: Deactivated successfully. 
Jun 25 20:52:46.799510 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Jun 25 20:52:46.802560 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 20:52:46.802985 systemd[1]: session-9.scope: Consumed 7.093s CPU time, 136.4M memory peak, 0B memory swap peak. Jun 25 20:52:46.805107 systemd-logind[1493]: Removed session 9. Jun 25 20:52:57.025564 kubelet[2684]: I0625 20:52:57.025515 2684 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 20:52:57.027421 kubelet[2684]: I0625 20:52:57.026508 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 20:52:57.027507 containerd[1514]: time="2024-06-25T20:52:57.026125731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 20:52:57.717891 kubelet[2684]: I0625 20:52:57.716922 2684 topology_manager.go:215] "Topology Admit Handler" podUID="d3d1434b-ecad-4749-9625-233d651ae5e6" podNamespace="kube-system" podName="kube-proxy-mz29h" Jun 25 20:52:57.721065 kubelet[2684]: I0625 20:52:57.720989 2684 topology_manager.go:215] "Topology Admit Handler" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" podNamespace="kube-system" podName="cilium-xr954" Jun 25 20:52:57.740559 kubelet[2684]: I0625 20:52:57.739708 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88717f99-30cc-4aab-974e-de23ae6b5074-clustermesh-secrets\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.740911 systemd[1]: Created slice kubepods-besteffort-podd3d1434b_ecad_4749_9625_233d651ae5e6.slice - libcontainer container kubepods-besteffort-podd3d1434b_ecad_4749_9625_233d651ae5e6.slice. 
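[Editor's note] Above, the kubelet reports pushing the node's pod CIDR (192.168.0.0/24) to the container runtime and then waiting for a CNI config to be dropped in. A stdlib-only sketch of the kind of containment check an IPAM layer performs against that CIDR; the sample pod IP is made up purely for illustration:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Pod CIDR reported by the kubelet in the log above.
    	cidr := netip.MustParsePrefix("192.168.0.0/24")

    	// Hypothetical pod IP, for illustration only.
    	podIP := netip.MustParseAddr("192.168.0.17")

    	fmt.Printf("%s contains %s: %v\n", cidr, podIP, cidr.Contains(podIP))
    	fmt.Printf("usable host addresses in a /24: %d\n", 1<<(32-cidr.Bits())-2)
    }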
Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742238 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-hostproc\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742300 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-cgroup\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742340 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3d1434b-ecad-4749-9625-233d651ae5e6-lib-modules\") pod \"kube-proxy-mz29h\" (UID: \"d3d1434b-ecad-4749-9625-233d651ae5e6\") " pod="kube-system/kube-proxy-mz29h" Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742373 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-net\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742405 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3d1434b-ecad-4749-9625-233d651ae5e6-xtables-lock\") pod \"kube-proxy-mz29h\" (UID: \"d3d1434b-ecad-4749-9625-233d651ae5e6\") " pod="kube-system/kube-proxy-mz29h" Jun 25 20:52:57.743623 kubelet[2684]: I0625 20:52:57.742445 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-bpf-maps\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743942 kubelet[2684]: I0625 20:52:57.742476 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cni-path\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743942 kubelet[2684]: I0625 20:52:57.742509 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-kernel\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743942 kubelet[2684]: I0625 20:52:57.742549 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kr8\" (UniqueName: \"kubernetes.io/projected/d3d1434b-ecad-4749-9625-233d651ae5e6-kube-api-access-42kr8\") pod \"kube-proxy-mz29h\" (UID: \"d3d1434b-ecad-4749-9625-233d651ae5e6\") " pod="kube-system/kube-proxy-mz29h" Jun 25 20:52:57.743942 kubelet[2684]: I0625 20:52:57.742587 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-run\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.743942 kubelet[2684]: I0625 20:52:57.742619 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-hubble-tls\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742653 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l62\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742686 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3d1434b-ecad-4749-9625-233d651ae5e6-kube-proxy\") pod \"kube-proxy-mz29h\" (UID: \"d3d1434b-ecad-4749-9625-233d651ae5e6\") " pod="kube-system/kube-proxy-mz29h" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742720 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-etc-cni-netd\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742751 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-xtables-lock\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742782 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-config-path\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.744295 kubelet[2684]: I0625 20:52:57.742814 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-lib-modules\") pod \"cilium-xr954\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " pod="kube-system/cilium-xr954" Jun 25 20:52:57.763695 systemd[1]: Created slice kubepods-burstable-pod88717f99_30cc_4aab_974e_de23ae6b5074.slice - libcontainer container kubepods-burstable-pod88717f99_30cc_4aab_974e_de23ae6b5074.slice. 
Jun 25 20:52:57.884921 kubelet[2684]: E0625 20:52:57.884519 2684 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 20:52:57.884921 kubelet[2684]: E0625 20:52:57.884568 2684 projected.go:200] Error preparing data for projected volume kube-api-access-42l62 for pod kube-system/cilium-xr954: configmap "kube-root-ca.crt" not found Jun 25 20:52:57.884921 kubelet[2684]: E0625 20:52:57.884673 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62 podName:88717f99-30cc-4aab-974e-de23ae6b5074 nodeName:}" failed. No retries permitted until 2024-06-25 20:52:58.384635763 +0000 UTC m=+14.198888427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-42l62" (UniqueName: "kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62") pod "cilium-xr954" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074") : configmap "kube-root-ca.crt" not found Jun 25 20:52:57.885328 kubelet[2684]: E0625 20:52:57.885303 2684 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 20:52:57.885521 kubelet[2684]: E0625 20:52:57.885401 2684 projected.go:200] Error preparing data for projected volume kube-api-access-42kr8 for pod kube-system/kube-proxy-mz29h: configmap "kube-root-ca.crt" not found Jun 25 20:52:57.889206 kubelet[2684]: E0625 20:52:57.885617 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3d1434b-ecad-4749-9625-233d651ae5e6-kube-api-access-42kr8 podName:d3d1434b-ecad-4749-9625-233d651ae5e6 nodeName:}" failed. No retries permitted until 2024-06-25 20:52:58.385442116 +0000 UTC m=+14.199694785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-42kr8" (UniqueName: "kubernetes.io/projected/d3d1434b-ecad-4749-9625-233d651ae5e6-kube-api-access-42kr8") pod "kube-proxy-mz29h" (UID: "d3d1434b-ecad-4749-9625-233d651ae5e6") : configmap "kube-root-ca.crt" not found Jun 25 20:52:58.128600 kubelet[2684]: I0625 20:52:58.128538 2684 topology_manager.go:215] "Topology Admit Handler" podUID="27356319-4322-4d73-98be-e9e2aae5e698" podNamespace="kube-system" podName="cilium-operator-5cc964979-5rzrr" Jun 25 20:52:58.145804 systemd[1]: Created slice kubepods-besteffort-pod27356319_4322_4d73_98be_e9e2aae5e698.slice - libcontainer container kubepods-besteffort-pod27356319_4322_4d73_98be_e9e2aae5e698.slice. 
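[Editor's note] Both projected-volume mounts above fail because the kube-root-ca.crt ConfigMap does not exist yet, and the kubelet schedules a retry with durationBeforeRetry 500ms ("No retries permitted until ..."). A stdlib sketch of that backoff bookkeeping; only the initial 500ms delay comes from the log, and the doubling on later attempts is an assumption about the usual exponential-backoff pattern rather than something this log shows:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Time of the failed mount attempt, reconstructed from the log above
    	// (the logged "no retries until" value is exactly this plus 500ms).
    	failedAt, err := time.Parse(
    		"2006-01-02 15:04:05.999999999 -0700 MST",
    		"2024-06-25 20:52:57.884635763 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}

    	delay := 500 * time.Millisecond // initial delay reported by the kubelet
    	for attempt := 1; attempt <= 3; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted until %s\n",
    			attempt, failedAt.Add(delay).Format(time.RFC3339Nano))
    		delay *= 2 // assumed doubling for subsequent attempts
    	}
    }

The first printed deadline matches the 20:52:58.384635763 value in the kubelet's error above; the mount then succeeds on retry once the ConfigMap appears.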
Jun 25 20:52:58.147895 kubelet[2684]: I0625 20:52:58.147864 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4lb\" (UniqueName: \"kubernetes.io/projected/27356319-4322-4d73-98be-e9e2aae5e698-kube-api-access-br4lb\") pod \"cilium-operator-5cc964979-5rzrr\" (UID: \"27356319-4322-4d73-98be-e9e2aae5e698\") " pod="kube-system/cilium-operator-5cc964979-5rzrr" Jun 25 20:52:58.148022 kubelet[2684]: I0625 20:52:58.147953 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27356319-4322-4d73-98be-e9e2aae5e698-cilium-config-path\") pod \"cilium-operator-5cc964979-5rzrr\" (UID: \"27356319-4322-4d73-98be-e9e2aae5e698\") " pod="kube-system/cilium-operator-5cc964979-5rzrr" Jun 25 20:52:58.457316 containerd[1514]: time="2024-06-25T20:52:58.456626654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5rzrr,Uid:27356319-4322-4d73-98be-e9e2aae5e698,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:58.501587 containerd[1514]: time="2024-06-25T20:52:58.501047352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:58.501587 containerd[1514]: time="2024-06-25T20:52:58.501249302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.501587 containerd[1514]: time="2024-06-25T20:52:58.501332133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:58.501587 containerd[1514]: time="2024-06-25T20:52:58.501378067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.536472 systemd[1]: Started cri-containerd-36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403.scope - libcontainer container 36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403. Jun 25 20:52:58.601549 containerd[1514]: time="2024-06-25T20:52:58.601358719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5rzrr,Uid:27356319-4322-4d73-98be-e9e2aae5e698,Namespace:kube-system,Attempt:0,} returns sandbox id \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\"" Jun 25 20:52:58.604939 containerd[1514]: time="2024-06-25T20:52:58.604889462Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 20:52:58.658864 containerd[1514]: time="2024-06-25T20:52:58.658787303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mz29h,Uid:d3d1434b-ecad-4749-9625-233d651ae5e6,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:58.673167 containerd[1514]: time="2024-06-25T20:52:58.672832979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr954,Uid:88717f99-30cc-4aab-974e-de23ae6b5074,Namespace:kube-system,Attempt:0,}" Jun 25 20:52:58.696195 containerd[1514]: time="2024-06-25T20:52:58.695572839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:58.698134 containerd[1514]: time="2024-06-25T20:52:58.696487111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.698134 containerd[1514]: time="2024-06-25T20:52:58.697624527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:58.698134 containerd[1514]: time="2024-06-25T20:52:58.697649425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.705897 containerd[1514]: time="2024-06-25T20:52:58.705717053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:52:58.706136 containerd[1514]: time="2024-06-25T20:52:58.705961835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.706136 containerd[1514]: time="2024-06-25T20:52:58.706071081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:52:58.706357 containerd[1514]: time="2024-06-25T20:52:58.706124211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:52:58.740456 systemd[1]: Started cri-containerd-600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5.scope - libcontainer container 600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5. Jun 25 20:52:58.744560 systemd[1]: Started cri-containerd-6eadac71f8fcdf624a55751fb0f07aa5f67a65b14bb54cb6575815def8642f12.scope - libcontainer container 6eadac71f8fcdf624a55751fb0f07aa5f67a65b14bb54cb6575815def8642f12. Jun 25 20:52:58.792768 containerd[1514]: time="2024-06-25T20:52:58.792721231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xr954,Uid:88717f99-30cc-4aab-974e-de23ae6b5074,Namespace:kube-system,Attempt:0,} returns sandbox id \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\"" Jun 25 20:52:58.808053 containerd[1514]: time="2024-06-25T20:52:58.807893556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mz29h,Uid:d3d1434b-ecad-4749-9625-233d651ae5e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eadac71f8fcdf624a55751fb0f07aa5f67a65b14bb54cb6575815def8642f12\"" Jun 25 20:52:58.816686 containerd[1514]: time="2024-06-25T20:52:58.816622796Z" level=info msg="CreateContainer within sandbox \"6eadac71f8fcdf624a55751fb0f07aa5f67a65b14bb54cb6575815def8642f12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 20:52:58.842491 containerd[1514]: time="2024-06-25T20:52:58.842415372Z" level=info msg="CreateContainer within sandbox \"6eadac71f8fcdf624a55751fb0f07aa5f67a65b14bb54cb6575815def8642f12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e\"" Jun 25 20:52:58.843750 containerd[1514]: time="2024-06-25T20:52:58.843706761Z" level=info msg="StartContainer for \"34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e\"" Jun 25 20:52:58.902396 systemd[1]: Started cri-containerd-34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e.scope - libcontainer container 34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e. 
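[Editor's note] The node config logged earlier shows CgroupDriver "systemd" with CgroupRoot "/", and each container started above gets its own "cri-containerd-<id>.scope" unit under the pod slice created for it. A stdlib sketch assembling the cgroup filesystem path this layout implies; the unit names are taken from the log, while the /sys/fs/cgroup root and the nested-slice layout are the conventional systemd-driver arrangement rather than something the log states directly:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // cgroupPath assembles the conventional on-disk path for a container cgroup
    // under the systemd driver: each dash-separated component of a slice name
    // becomes a nested parent slice directory.
    func cgroupPath(podSlice, containerID string) string {
    	parts := strings.Split(strings.TrimSuffix(podSlice, ".slice"), "-")
    	dirs := []string{"/sys/fs/cgroup"}
    	for i := range parts {
    		dirs = append(dirs, strings.Join(parts[:i+1], "-")+".slice")
    	}
    	dirs = append(dirs, "cri-containerd-"+containerID+".scope")
    	return filepath.Join(dirs...)
    }

    func main() {
    	// Pod slice and container ID taken from the kube-proxy entries above.
    	fmt.Println(cgroupPath(
    		"kubepods-besteffort-podd3d1434b_ecad_4749_9625_233d651ae5e6.slice",
    		"34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e",
    	))
    }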
Jun 25 20:52:58.945435 containerd[1514]: time="2024-06-25T20:52:58.945372327Z" level=info msg="StartContainer for \"34d7b58f51e642fb914d9f33f203085b51c22c9d4fcc77d29040a8f87c4c4f5e\" returns successfully" Jun 25 20:53:00.279748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996620138.mount: Deactivated successfully. Jun 25 20:53:01.179644 containerd[1514]: time="2024-06-25T20:53:01.179560071Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:53:01.181102 containerd[1514]: time="2024-06-25T20:53:01.181029751Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907241" Jun 25 20:53:01.181759 containerd[1514]: time="2024-06-25T20:53:01.181717940Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:53:01.186058 containerd[1514]: time="2024-06-25T20:53:01.185786824Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.580838641s" Jun 25 20:53:01.186058 containerd[1514]: time="2024-06-25T20:53:01.185910266Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 25 20:53:01.190385 containerd[1514]: time="2024-06-25T20:53:01.189911768Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 20:53:01.192539 containerd[1514]: time="2024-06-25T20:53:01.192157370Z" level=info msg="CreateContainer within sandbox \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 20:53:01.211510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953329071.mount: Deactivated successfully. Jun 25 20:53:01.216992 containerd[1514]: time="2024-06-25T20:53:01.215782356Z" level=info msg="CreateContainer within sandbox \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\"" Jun 25 20:53:01.218703 containerd[1514]: time="2024-06-25T20:53:01.218661760Z" level=info msg="StartContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\"" Jun 25 20:53:01.273589 systemd[1]: Started cri-containerd-0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f.scope - libcontainer container 0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f. 
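[Editor's note] The operator-generic pull above reports 18907241 bytes read over a wall-clock 2.580838641s, and an empty repo tag because the image was requested by digest, so only the repo digest is recorded. A quick stdlib calculation of the average fetch rate those two logged figures imply:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures reported by containerd in the log above.
    	bytesRead := 18907241.0
    	elapsed, err := time.ParseDuration("2.580838641s")
    	if err != nil {
    		panic(err)
    	}

    	mib := bytesRead / (1024 * 1024)
    	fmt.Printf("fetched %.1f MiB in %s => about %.1f MiB/s average\n",
    		mib, elapsed, mib/elapsed.Seconds())
    }

That works out to roughly 18 MiB fetched at about 7 MiB/s, which is why the operator image is ready well before the much larger cilium image pulled next.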
Jun 25 20:53:01.315253 containerd[1514]: time="2024-06-25T20:53:01.314697442Z" level=info msg="StartContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" returns successfully" Jun 25 20:53:01.627992 kubelet[2684]: I0625 20:53:01.627805 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mz29h" podStartSLOduration=4.627741658 podStartE2EDuration="4.627741658s" podCreationTimestamp="2024-06-25 20:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:52:59.581608749 +0000 UTC m=+15.395861431" watchObservedRunningTime="2024-06-25 20:53:01.627741658 +0000 UTC m=+17.441994341" Jun 25 20:53:01.636874 kubelet[2684]: I0625 20:53:01.636652 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5rzrr" podStartSLOduration=1.051136363 podStartE2EDuration="3.636597338s" podCreationTimestamp="2024-06-25 20:52:58 +0000 UTC" firstStartedPulling="2024-06-25 20:52:58.604150827 +0000 UTC m=+14.418403493" lastFinishedPulling="2024-06-25 20:53:01.189611788 +0000 UTC m=+17.003864468" observedRunningTime="2024-06-25 20:53:01.626461348 +0000 UTC m=+17.440714031" watchObservedRunningTime="2024-06-25 20:53:01.636597338 +0000 UTC m=+17.450850021" Jun 25 20:53:08.575868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586338457.mount: Deactivated successfully. Jun 25 20:53:11.623343 containerd[1514]: time="2024-06-25T20:53:11.623157329Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:53:11.626398 containerd[1514]: time="2024-06-25T20:53:11.625667260Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735291" Jun 25 20:53:11.626398 containerd[1514]: time="2024-06-25T20:53:11.626341130Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 20:53:11.628688 containerd[1514]: time="2024-06-25T20:53:11.628518021Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.438559901s" Jun 25 20:53:11.628688 containerd[1514]: time="2024-06-25T20:53:11.628565800Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 25 20:53:11.634812 containerd[1514]: time="2024-06-25T20:53:11.634770267Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 20:53:11.712734 containerd[1514]: time="2024-06-25T20:53:11.712660213Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\"" Jun 25 20:53:11.714634 containerd[1514]: time="2024-06-25T20:53:11.714578205Z" level=info msg="StartContainer for \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\"" Jun 25 20:53:11.971736 systemd[1]: run-containerd-runc-k8s.io-f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e-runc.nVhNTx.mount: Deactivated successfully. Jun 25 20:53:11.981480 systemd[1]: Started cri-containerd-f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e.scope - libcontainer container f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e. Jun 25 20:53:12.025423 containerd[1514]: time="2024-06-25T20:53:12.025362437Z" level=info msg="StartContainer for \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\" returns successfully" Jun 25 20:53:12.045546 systemd[1]: cri-containerd-f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e.scope: Deactivated successfully. Jun 25 20:53:12.352066 containerd[1514]: time="2024-06-25T20:53:12.344342147Z" level=info msg="shim disconnected" id=f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e namespace=k8s.io Jun 25 20:53:12.352646 containerd[1514]: time="2024-06-25T20:53:12.352394804Z" level=warning msg="cleaning up after shim disconnected" id=f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e namespace=k8s.io Jun 25 20:53:12.352646 containerd[1514]: time="2024-06-25T20:53:12.352425694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:53:12.621214 containerd[1514]: time="2024-06-25T20:53:12.619678750Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 20:53:12.656624 containerd[1514]: time="2024-06-25T20:53:12.656528027Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\"" Jun 25 20:53:12.658427 containerd[1514]: time="2024-06-25T20:53:12.658391846Z" level=info msg="StartContainer for \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\"" Jun 25 20:53:12.698407 systemd[1]: Started cri-containerd-ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518.scope - libcontainer container ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518. Jun 25 20:53:12.702938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e-rootfs.mount: Deactivated successfully. Jun 25 20:53:12.743040 containerd[1514]: time="2024-06-25T20:53:12.742984392Z" level=info msg="StartContainer for \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\" returns successfully" Jun 25 20:53:12.761222 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 20:53:12.761611 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 20:53:12.761770 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 20:53:12.771688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
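[Editor's note] In the pod_startup_latency_tracker entries above for cilium-operator, podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (firstStartedPulling to lastFinishedPulling); that reading of the metric is my interpretation, but it can be checked directly against the logged timestamps with the standard library:

    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Timestamps reported for cilium-operator-5cc964979-5rzrr above.
    	firstPull := mustParse("2024-06-25 20:52:58.604150827 +0000 UTC")
    	lastPull := mustParse("2024-06-25 20:53:01.189611788 +0000 UTC")
    	e2e := 3636597338 * time.Nanosecond // podStartE2EDuration=3.636597338s

    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Printf("pull window %s, derived SLO duration %s (logged: 1.051136363s)\n",
    		lastPull.Sub(firstPull), slo)
    }

The derived value lands within a few tens of nanoseconds of the logged 1.051136363s, consistent with the SLO figure excluding time spent pulling images.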
Jun 25 20:53:12.772033 systemd[1]: cri-containerd-ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518.scope: Deactivated successfully. Jun 25 20:53:12.813566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518-rootfs.mount: Deactivated successfully. Jun 25 20:53:12.816389 containerd[1514]: time="2024-06-25T20:53:12.816176957Z" level=info msg="shim disconnected" id=ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518 namespace=k8s.io Jun 25 20:53:12.816634 containerd[1514]: time="2024-06-25T20:53:12.816392399Z" level=warning msg="cleaning up after shim disconnected" id=ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518 namespace=k8s.io Jun 25 20:53:12.816634 containerd[1514]: time="2024-06-25T20:53:12.816413245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:53:12.843318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 20:53:13.612518 containerd[1514]: time="2024-06-25T20:53:13.612382528Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 20:53:13.644514 containerd[1514]: time="2024-06-25T20:53:13.644133698Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\"" Jun 25 20:53:13.648091 containerd[1514]: time="2024-06-25T20:53:13.646400690Z" level=info msg="StartContainer for \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\"" Jun 25 20:53:13.695464 systemd[1]: Started cri-containerd-28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4.scope - libcontainer container 28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4. Jun 25 20:53:13.764366 systemd[1]: cri-containerd-28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4.scope: Deactivated successfully. Jun 25 20:53:13.773428 containerd[1514]: time="2024-06-25T20:53:13.773387261Z" level=info msg="StartContainer for \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\" returns successfully" Jun 25 20:53:13.801779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4-rootfs.mount: Deactivated successfully. 
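[Editor's note] The mount-bpf-fs init container created and started above is the step that makes a BPF filesystem available to the agent. A rough sketch of the underlying mount call using golang.org/x/sys/unix; this illustrates the syscall only, it is not Cilium's actual implementation, and the /sys/fs/bpf mountpoint is the conventional location rather than something stated in this log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	const target = "/sys/fs/bpf" // conventional bpffs mountpoint (assumption)

    	if err := os.MkdirAll(target, 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, "mkdir:", err)
    		os.Exit(1)
    	}

    	// Mount a BPF filesystem instance; this needs CAP_SYS_ADMIN, so it is
    	// expected to fail when run unprivileged.
    	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
    		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
    		os.Exit(1)
    	}
    	fmt.Println("bpffs mounted at", target)
    }

In the pod above the init container exits immediately after this kind of setup, which is why its scope is deactivated and the shim cleaned up within the same second.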
Jun 25 20:53:13.804630 containerd[1514]: time="2024-06-25T20:53:13.804426912Z" level=info msg="shim disconnected" id=28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4 namespace=k8s.io Jun 25 20:53:13.804630 containerd[1514]: time="2024-06-25T20:53:13.804543085Z" level=warning msg="cleaning up after shim disconnected" id=28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4 namespace=k8s.io Jun 25 20:53:13.804630 containerd[1514]: time="2024-06-25T20:53:13.804561117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:53:14.621063 containerd[1514]: time="2024-06-25T20:53:14.620863891Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 20:53:14.640241 containerd[1514]: time="2024-06-25T20:53:14.640173822Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\"" Jun 25 20:53:14.642529 containerd[1514]: time="2024-06-25T20:53:14.642480981Z" level=info msg="StartContainer for \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\"" Jun 25 20:53:14.688404 systemd[1]: Started cri-containerd-2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b.scope - libcontainer container 2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b. Jun 25 20:53:14.722492 systemd[1]: cri-containerd-2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b.scope: Deactivated successfully. Jun 25 20:53:14.725967 containerd[1514]: time="2024-06-25T20:53:14.724768361Z" level=info msg="StartContainer for \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\" returns successfully" Jun 25 20:53:14.753002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b-rootfs.mount: Deactivated successfully. 
Jun 25 20:53:14.760207 containerd[1514]: time="2024-06-25T20:53:14.759805423Z" level=info msg="shim disconnected" id=2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b namespace=k8s.io Jun 25 20:53:14.760207 containerd[1514]: time="2024-06-25T20:53:14.759950453Z" level=warning msg="cleaning up after shim disconnected" id=2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b namespace=k8s.io Jun 25 20:53:14.760207 containerd[1514]: time="2024-06-25T20:53:14.759973348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:53:14.770263 containerd[1514]: time="2024-06-25T20:53:14.769910735Z" level=error msg="collecting metrics for 2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b" error="ttrpc: closed: unknown" Jun 25 20:53:15.624762 containerd[1514]: time="2024-06-25T20:53:15.624343898Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 20:53:15.703754 containerd[1514]: time="2024-06-25T20:53:15.703698798Z" level=info msg="CreateContainer within sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\"" Jun 25 20:53:15.709896 containerd[1514]: time="2024-06-25T20:53:15.709847107Z" level=info msg="StartContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\"" Jun 25 20:53:15.773408 systemd[1]: Started cri-containerd-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd.scope - libcontainer container 697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd. Jun 25 20:53:15.814042 containerd[1514]: time="2024-06-25T20:53:15.813620846Z" level=info msg="StartContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" returns successfully" Jun 25 20:53:15.891480 systemd[1]: run-containerd-runc-k8s.io-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd-runc.q9YCsI.mount: Deactivated successfully. Jun 25 20:53:15.998127 kubelet[2684]: I0625 20:53:15.997811 2684 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 20:53:16.028022 kubelet[2684]: I0625 20:53:16.026615 2684 topology_manager.go:215] "Topology Admit Handler" podUID="7750f013-3ec4-45e3-9e6b-80c70006f41f" podNamespace="kube-system" podName="coredns-76f75df574-2vsq7" Jun 25 20:53:16.030610 kubelet[2684]: I0625 20:53:16.030582 2684 topology_manager.go:215] "Topology Admit Handler" podUID="93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd" podNamespace="kube-system" podName="coredns-76f75df574-2dvj2" Jun 25 20:53:16.051267 systemd[1]: Created slice kubepods-burstable-pod7750f013_3ec4_45e3_9e6b_80c70006f41f.slice - libcontainer container kubepods-burstable-pod7750f013_3ec4_45e3_9e6b_80c70006f41f.slice. Jun 25 20:53:16.055797 systemd[1]: Created slice kubepods-burstable-pod93fed6c7_dd64_4975_9ab5_a4d2fd6ccfcd.slice - libcontainer container kubepods-burstable-pod93fed6c7_dd64_4975_9ab5_a4d2fd6ccfcd.slice. 
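[Editor's note] Once the cilium-agent container above is running, the kubelet logs "Fast updating node status as it just became ready" and the two pending CoreDNS pods are admitted. A hedged client-go sketch for inspecting the node conditions that flip at this point, assuming k8s.io/client-go is available; the kubeconfig path is a placeholder, not taken from this log, while the node name is the one the kubelet registered above:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	node, err := client.CoreV1().Nodes().Get(context.Background(),
    		"srv-azn0z.gb1.brightbox.com", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		// Ready flips to True once the runtime and the CNI are healthy.
    		fmt.Printf("%-22s %s  %s\n", c.Type, c.Status, c.Reason)
    	}
    }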
Jun 25 20:53:16.081234 kubelet[2684]: I0625 20:53:16.080757 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdfb\" (UniqueName: \"kubernetes.io/projected/7750f013-3ec4-45e3-9e6b-80c70006f41f-kube-api-access-ktdfb\") pod \"coredns-76f75df574-2vsq7\" (UID: \"7750f013-3ec4-45e3-9e6b-80c70006f41f\") " pod="kube-system/coredns-76f75df574-2vsq7" Jun 25 20:53:16.081234 kubelet[2684]: I0625 20:53:16.080817 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd-config-volume\") pod \"coredns-76f75df574-2dvj2\" (UID: \"93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd\") " pod="kube-system/coredns-76f75df574-2dvj2" Jun 25 20:53:16.081234 kubelet[2684]: I0625 20:53:16.080854 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbcg\" (UniqueName: \"kubernetes.io/projected/93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd-kube-api-access-cnbcg\") pod \"coredns-76f75df574-2dvj2\" (UID: \"93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd\") " pod="kube-system/coredns-76f75df574-2dvj2" Jun 25 20:53:16.081234 kubelet[2684]: I0625 20:53:16.080895 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7750f013-3ec4-45e3-9e6b-80c70006f41f-config-volume\") pod \"coredns-76f75df574-2vsq7\" (UID: \"7750f013-3ec4-45e3-9e6b-80c70006f41f\") " pod="kube-system/coredns-76f75df574-2vsq7" Jun 25 20:53:16.371946 containerd[1514]: time="2024-06-25T20:53:16.371793067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2dvj2,Uid:93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd,Namespace:kube-system,Attempt:0,}" Jun 25 20:53:16.373432 containerd[1514]: time="2024-06-25T20:53:16.373176266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2vsq7,Uid:7750f013-3ec4-45e3-9e6b-80c70006f41f,Namespace:kube-system,Attempt:0,}" Jun 25 20:53:16.645977 kubelet[2684]: I0625 20:53:16.645809 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xr954" podStartSLOduration=6.813921479 podStartE2EDuration="19.645725401s" podCreationTimestamp="2024-06-25 20:52:57 +0000 UTC" firstStartedPulling="2024-06-25 20:52:58.797286346 +0000 UTC m=+14.611539009" lastFinishedPulling="2024-06-25 20:53:11.629090263 +0000 UTC m=+27.443342931" observedRunningTime="2024-06-25 20:53:16.645630877 +0000 UTC m=+32.459883569" watchObservedRunningTime="2024-06-25 20:53:16.645725401 +0000 UTC m=+32.459978097" Jun 25 20:53:18.353510 systemd-networkd[1442]: cilium_host: Link UP Jun 25 20:53:18.355583 systemd-networkd[1442]: cilium_net: Link UP Jun 25 20:53:18.355952 systemd-networkd[1442]: cilium_net: Gained carrier Jun 25 20:53:18.365354 systemd-networkd[1442]: cilium_host: Gained carrier Jun 25 20:53:18.553324 systemd-networkd[1442]: cilium_vxlan: Link UP Jun 25 20:53:18.553338 systemd-networkd[1442]: cilium_vxlan: Gained carrier Jun 25 20:53:18.574537 systemd-networkd[1442]: cilium_net: Gained IPv6LL Jun 25 20:53:18.694393 systemd-networkd[1442]: cilium_host: Gained IPv6LL Jun 25 20:53:19.059346 kernel: NET: Registered PF_ALG protocol family Jun 25 20:53:20.095135 systemd-networkd[1442]: lxc_health: Link UP Jun 25 20:53:20.143107 systemd-networkd[1442]: lxc_health: Gained carrier Jun 25 20:53:20.467293 systemd-networkd[1442]: lxce37a5de74c53: 
Link UP Jun 25 20:53:20.491348 kernel: eth0: renamed from tmp8ef15 Jun 25 20:53:20.500483 systemd-networkd[1442]: cilium_vxlan: Gained IPv6LL Jun 25 20:53:20.506070 systemd-networkd[1442]: lxce37a5de74c53: Gained carrier Jun 25 20:53:20.527978 systemd-networkd[1442]: lxc6b45f1c92d78: Link UP Jun 25 20:53:20.530228 kernel: eth0: renamed from tmp32141 Jun 25 20:53:20.537407 systemd-networkd[1442]: lxc6b45f1c92d78: Gained carrier Jun 25 20:53:21.694406 systemd-networkd[1442]: lxce37a5de74c53: Gained IPv6LL Jun 25 20:53:21.758338 systemd-networkd[1442]: lxc_health: Gained IPv6LL Jun 25 20:53:22.590359 systemd-networkd[1442]: lxc6b45f1c92d78: Gained IPv6LL Jun 25 20:53:26.231159 containerd[1514]: time="2024-06-25T20:53:26.230616927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:53:26.231159 containerd[1514]: time="2024-06-25T20:53:26.230707615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:53:26.231159 containerd[1514]: time="2024-06-25T20:53:26.230736358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:53:26.231159 containerd[1514]: time="2024-06-25T20:53:26.230753209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:53:26.267169 containerd[1514]: time="2024-06-25T20:53:26.266763952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:53:26.267169 containerd[1514]: time="2024-06-25T20:53:26.266855151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:53:26.267169 containerd[1514]: time="2024-06-25T20:53:26.266951354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:53:26.267169 containerd[1514]: time="2024-06-25T20:53:26.266976945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:53:26.327023 systemd[1]: Started cri-containerd-8ef158532efa37d9440218b84a78cda3cc54e37185b0ebaed0f5eeb72daa92bf.scope - libcontainer container 8ef158532efa37d9440218b84a78cda3cc54e37185b0ebaed0f5eeb72daa92bf. Jun 25 20:53:26.336290 systemd[1]: Started cri-containerd-3214120f96f987b6149219e8b4e2eddf23c0e70356c594af24dfcc56d5dca3d9.scope - libcontainer container 3214120f96f987b6149219e8b4e2eddf23c0e70356c594af24dfcc56d5dca3d9. 
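[Editor's note] The systemd-networkd entries above show the links Cilium creates on the host: cilium_host/cilium_net, the cilium_vxlan overlay device, and one lxc* veth peer per endpoint, while the kernel briefly names the container-side interface tmp… before renaming it to eth0 inside the pod network namespace. A stdlib sketch that lists such interfaces on a host:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, ifc := range ifaces {
    		// Host-side endpoint veths and Cilium's own devices, matching the
    		// names in the log above (lxc_health, lxce37a5de74c53, cilium_vxlan, ...).
    		if strings.HasPrefix(ifc.Name, "lxc") || strings.HasPrefix(ifc.Name, "cilium_") {
    			fmt.Printf("%-20s index=%d up=%v\n",
    				ifc.Name, ifc.Index, ifc.Flags&net.FlagUp != 0)
    		}
    	}
    }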
Jun 25 20:53:26.491937 containerd[1514]: time="2024-06-25T20:53:26.491724743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2vsq7,Uid:7750f013-3ec4-45e3-9e6b-80c70006f41f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3214120f96f987b6149219e8b4e2eddf23c0e70356c594af24dfcc56d5dca3d9\"" Jun 25 20:53:26.496322 containerd[1514]: time="2024-06-25T20:53:26.495991485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2dvj2,Uid:93fed6c7-dd64-4975-9ab5-a4d2fd6ccfcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ef158532efa37d9440218b84a78cda3cc54e37185b0ebaed0f5eeb72daa92bf\"" Jun 25 20:53:26.505938 containerd[1514]: time="2024-06-25T20:53:26.505167449Z" level=info msg="CreateContainer within sandbox \"3214120f96f987b6149219e8b4e2eddf23c0e70356c594af24dfcc56d5dca3d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 20:53:26.508103 containerd[1514]: time="2024-06-25T20:53:26.507866021Z" level=info msg="CreateContainer within sandbox \"8ef158532efa37d9440218b84a78cda3cc54e37185b0ebaed0f5eeb72daa92bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 20:53:26.538693 containerd[1514]: time="2024-06-25T20:53:26.538577625Z" level=info msg="CreateContainer within sandbox \"3214120f96f987b6149219e8b4e2eddf23c0e70356c594af24dfcc56d5dca3d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06ba40aa452456a3b6f7e99bd80aad630f186f7a8856e21b0ffd85b0e3528598\"" Jun 25 20:53:26.539938 containerd[1514]: time="2024-06-25T20:53:26.539906803Z" level=info msg="StartContainer for \"06ba40aa452456a3b6f7e99bd80aad630f186f7a8856e21b0ffd85b0e3528598\"" Jun 25 20:53:26.545178 containerd[1514]: time="2024-06-25T20:53:26.545071699Z" level=info msg="CreateContainer within sandbox \"8ef158532efa37d9440218b84a78cda3cc54e37185b0ebaed0f5eeb72daa92bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"429ae5de1b9c503c6ba3e02838295f5d8160ed29a482f52a0792431099911272\"" Jun 25 20:53:26.546656 containerd[1514]: time="2024-06-25T20:53:26.546627211Z" level=info msg="StartContainer for \"429ae5de1b9c503c6ba3e02838295f5d8160ed29a482f52a0792431099911272\"" Jun 25 20:53:26.587490 systemd[1]: Started cri-containerd-429ae5de1b9c503c6ba3e02838295f5d8160ed29a482f52a0792431099911272.scope - libcontainer container 429ae5de1b9c503c6ba3e02838295f5d8160ed29a482f52a0792431099911272. Jun 25 20:53:26.597731 systemd[1]: Started cri-containerd-06ba40aa452456a3b6f7e99bd80aad630f186f7a8856e21b0ffd85b0e3528598.scope - libcontainer container 06ba40aa452456a3b6f7e99bd80aad630f186f7a8856e21b0ffd85b0e3528598. 
Jun 25 20:53:26.642626 containerd[1514]: time="2024-06-25T20:53:26.642577423Z" level=info msg="StartContainer for \"429ae5de1b9c503c6ba3e02838295f5d8160ed29a482f52a0792431099911272\" returns successfully" Jun 25 20:53:26.650002 containerd[1514]: time="2024-06-25T20:53:26.649408501Z" level=info msg="StartContainer for \"06ba40aa452456a3b6f7e99bd80aad630f186f7a8856e21b0ffd85b0e3528598\" returns successfully" Jun 25 20:53:26.680203 kubelet[2684]: I0625 20:53:26.680106 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2vsq7" podStartSLOduration=28.679226229 podStartE2EDuration="28.679226229s" podCreationTimestamp="2024-06-25 20:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:53:26.677984843 +0000 UTC m=+42.492237519" watchObservedRunningTime="2024-06-25 20:53:26.679226229 +0000 UTC m=+42.493478905" Jun 25 20:53:27.238779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643674270.mount: Deactivated successfully. Jun 25 20:53:27.681533 kubelet[2684]: I0625 20:53:27.681480 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2dvj2" podStartSLOduration=29.681418478 podStartE2EDuration="29.681418478s" podCreationTimestamp="2024-06-25 20:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:53:26.699277788 +0000 UTC m=+42.513530469" watchObservedRunningTime="2024-06-25 20:53:27.681418478 +0000 UTC m=+43.495671155" Jun 25 20:53:58.014542 systemd[1]: Started sshd@7-10.230.13.114:22-139.178.89.65:46306.service - OpenSSH per-connection server daemon (139.178.89.65:46306). Jun 25 20:53:58.913672 sshd[4049]: Accepted publickey for core from 139.178.89.65 port 46306 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:53:58.916225 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:53:58.924489 systemd-logind[1493]: New session 10 of user core. Jun 25 20:53:58.933412 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 20:54:00.032529 sshd[4049]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:00.037283 systemd[1]: sshd@7-10.230.13.114:22-139.178.89.65:46306.service: Deactivated successfully. Jun 25 20:54:00.039890 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 20:54:00.040945 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jun 25 20:54:00.042547 systemd-logind[1493]: Removed session 10. Jun 25 20:54:05.187605 systemd[1]: Started sshd@8-10.230.13.114:22-139.178.89.65:46320.service - OpenSSH per-connection server daemon (139.178.89.65:46320). Jun 25 20:54:06.070502 sshd[4065]: Accepted publickey for core from 139.178.89.65 port 46320 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:06.072622 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:06.080268 systemd-logind[1493]: New session 11 of user core. Jun 25 20:54:06.089491 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 20:54:06.782556 sshd[4065]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:06.786937 systemd[1]: sshd@8-10.230.13.114:22-139.178.89.65:46320.service: Deactivated successfully. Jun 25 20:54:06.790455 systemd[1]: session-11.scope: Deactivated successfully. 
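[editorial note] The podStartSLOduration figures above are effectively the gap between podCreationTimestamp and observedRunningTime; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed to the window. A minimal Go sketch of that arithmetic, using the timestamps recorded for coredns-76f75df574-2vsq7 (illustrative only, not part of the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the kubelet entry for coredns-76f75df574-2vsq7.
	created, err := time.Parse(layout, "2024-06-25 20:52:58 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-06-25 20:53:26.679226229 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With zero-valued image-pull timestamps, the reported SLO duration
	// reduces to the plain creation-to-running gap.
	fmt.Println(running.Sub(created)) // 28.679226229s
}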
Jun 25 20:54:06.793143 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Jun 25 20:54:06.795045 systemd-logind[1493]: Removed session 11. Jun 25 20:54:11.933528 systemd[1]: Started sshd@9-10.230.13.114:22-139.178.89.65:34590.service - OpenSSH per-connection server daemon (139.178.89.65:34590). Jun 25 20:54:12.827814 sshd[4080]: Accepted publickey for core from 139.178.89.65 port 34590 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:12.830004 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:12.837079 systemd-logind[1493]: New session 12 of user core. Jun 25 20:54:12.843507 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 20:54:13.525342 sshd[4080]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:13.531276 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Jun 25 20:54:13.531842 systemd[1]: sshd@9-10.230.13.114:22-139.178.89.65:34590.service: Deactivated successfully. Jun 25 20:54:13.535500 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 20:54:13.537619 systemd-logind[1493]: Removed session 12. Jun 25 20:54:18.682627 systemd[1]: Started sshd@10-10.230.13.114:22-139.178.89.65:56126.service - OpenSSH per-connection server daemon (139.178.89.65:56126). Jun 25 20:54:19.566073 sshd[4093]: Accepted publickey for core from 139.178.89.65 port 56126 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:19.568069 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:19.575114 systemd-logind[1493]: New session 13 of user core. Jun 25 20:54:19.584391 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 20:54:20.268380 sshd[4093]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:20.275474 systemd[1]: sshd@10-10.230.13.114:22-139.178.89.65:56126.service: Deactivated successfully. Jun 25 20:54:20.278433 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 20:54:20.279500 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Jun 25 20:54:20.281272 systemd-logind[1493]: Removed session 13. Jun 25 20:54:20.422134 systemd[1]: Started sshd@11-10.230.13.114:22-139.178.89.65:56134.service - OpenSSH per-connection server daemon (139.178.89.65:56134). Jun 25 20:54:21.291671 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 56134 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:21.293622 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:21.300735 systemd-logind[1493]: New session 14 of user core. Jun 25 20:54:21.308456 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 20:54:22.032192 sshd[4108]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:22.039726 systemd[1]: sshd@11-10.230.13.114:22-139.178.89.65:56134.service: Deactivated successfully. Jun 25 20:54:22.043421 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 20:54:22.044566 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Jun 25 20:54:22.046249 systemd-logind[1493]: Removed session 14. Jun 25 20:54:22.190819 systemd[1]: Started sshd@12-10.230.13.114:22-139.178.89.65:56142.service - OpenSSH per-connection server daemon (139.178.89.65:56142). 
Jun 25 20:54:23.072654 sshd[4118]: Accepted publickey for core from 139.178.89.65 port 56142 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:23.075002 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:23.082552 systemd-logind[1493]: New session 15 of user core. Jun 25 20:54:23.087419 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 20:54:23.760322 sshd[4118]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:23.765450 systemd[1]: sshd@12-10.230.13.114:22-139.178.89.65:56142.service: Deactivated successfully. Jun 25 20:54:23.768857 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 20:54:23.770408 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Jun 25 20:54:23.771841 systemd-logind[1493]: Removed session 15. Jun 25 20:54:28.918599 systemd[1]: Started sshd@13-10.230.13.114:22-139.178.89.65:55226.service - OpenSSH per-connection server daemon (139.178.89.65:55226). Jun 25 20:54:29.809043 sshd[4131]: Accepted publickey for core from 139.178.89.65 port 55226 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:29.811241 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:29.817837 systemd-logind[1493]: New session 16 of user core. Jun 25 20:54:29.827460 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 20:54:30.515698 sshd[4131]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:30.521111 systemd[1]: sshd@13-10.230.13.114:22-139.178.89.65:55226.service: Deactivated successfully. Jun 25 20:54:30.524117 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 20:54:30.525631 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Jun 25 20:54:30.527085 systemd-logind[1493]: Removed session 16. Jun 25 20:54:30.671566 systemd[1]: Started sshd@14-10.230.13.114:22-139.178.89.65:55234.service - OpenSSH per-connection server daemon (139.178.89.65:55234). Jun 25 20:54:31.543045 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 55234 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:31.545247 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:31.551967 systemd-logind[1493]: New session 17 of user core. Jun 25 20:54:31.560406 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 20:54:32.583287 sshd[4146]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:32.590626 systemd[1]: sshd@14-10.230.13.114:22-139.178.89.65:55234.service: Deactivated successfully. Jun 25 20:54:32.594523 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 20:54:32.597241 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Jun 25 20:54:32.599905 systemd-logind[1493]: Removed session 17. Jun 25 20:54:32.740408 systemd[1]: Started sshd@15-10.230.13.114:22-139.178.89.65:55236.service - OpenSSH per-connection server daemon (139.178.89.65:55236). Jun 25 20:54:33.632242 sshd[4157]: Accepted publickey for core from 139.178.89.65 port 55236 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:33.634213 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:33.641547 systemd-logind[1493]: New session 18 of user core. Jun 25 20:54:33.647395 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 20:54:36.426023 sshd[4157]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:36.441843 systemd[1]: sshd@15-10.230.13.114:22-139.178.89.65:55236.service: Deactivated successfully. Jun 25 20:54:36.444953 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 20:54:36.447261 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Jun 25 20:54:36.449365 systemd-logind[1493]: Removed session 18. Jun 25 20:54:36.580571 systemd[1]: Started sshd@16-10.230.13.114:22-139.178.89.65:37082.service - OpenSSH per-connection server daemon (139.178.89.65:37082). Jun 25 20:54:37.462847 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 37082 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:37.465295 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:37.472026 systemd-logind[1493]: New session 19 of user core. Jun 25 20:54:37.481423 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 20:54:38.349519 sshd[4175]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:38.355068 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Jun 25 20:54:38.355660 systemd[1]: sshd@16-10.230.13.114:22-139.178.89.65:37082.service: Deactivated successfully. Jun 25 20:54:38.357946 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 20:54:38.359176 systemd-logind[1493]: Removed session 19. Jun 25 20:54:38.503507 systemd[1]: Started sshd@17-10.230.13.114:22-139.178.89.65:37084.service - OpenSSH per-connection server daemon (139.178.89.65:37084). Jun 25 20:54:39.371306 sshd[4186]: Accepted publickey for core from 139.178.89.65 port 37084 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:39.373395 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:39.381287 systemd-logind[1493]: New session 20 of user core. Jun 25 20:54:39.389393 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 20:54:40.054574 sshd[4186]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:40.058418 systemd[1]: sshd@17-10.230.13.114:22-139.178.89.65:37084.service: Deactivated successfully. Jun 25 20:54:40.061527 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 20:54:40.063887 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Jun 25 20:54:40.065277 systemd-logind[1493]: Removed session 20. Jun 25 20:54:45.204332 systemd[1]: Started sshd@18-10.230.13.114:22-139.178.89.65:37086.service - OpenSSH per-connection server daemon (139.178.89.65:37086). Jun 25 20:54:46.074350 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 37086 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:46.076345 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:46.082954 systemd-logind[1493]: New session 21 of user core. Jun 25 20:54:46.091435 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 20:54:46.759032 sshd[4204]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:46.764179 systemd[1]: sshd@18-10.230.13.114:22-139.178.89.65:37086.service: Deactivated successfully. Jun 25 20:54:46.766532 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 20:54:46.767751 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Jun 25 20:54:46.769573 systemd-logind[1493]: Removed session 21. 
Jun 25 20:54:51.919519 systemd[1]: Started sshd@19-10.230.13.114:22-139.178.89.65:42890.service - OpenSSH per-connection server daemon (139.178.89.65:42890). Jun 25 20:54:52.791356 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 42890 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:52.793269 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:52.800511 systemd-logind[1493]: New session 22 of user core. Jun 25 20:54:52.807397 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 20:54:53.488120 sshd[4217]: pam_unix(sshd:session): session closed for user core Jun 25 20:54:53.493008 systemd[1]: sshd@19-10.230.13.114:22-139.178.89.65:42890.service: Deactivated successfully. Jun 25 20:54:53.495437 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 20:54:53.496471 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. Jun 25 20:54:53.498334 systemd-logind[1493]: Removed session 22. Jun 25 20:54:58.640110 systemd[1]: Started sshd@20-10.230.13.114:22-139.178.89.65:54872.service - OpenSSH per-connection server daemon (139.178.89.65:54872). Jun 25 20:54:59.523046 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 54872 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:54:59.525266 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:54:59.531641 systemd-logind[1493]: New session 23 of user core. Jun 25 20:54:59.538390 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 20:55:00.216703 sshd[4230]: pam_unix(sshd:session): session closed for user core Jun 25 20:55:00.222398 systemd[1]: sshd@20-10.230.13.114:22-139.178.89.65:54872.service: Deactivated successfully. Jun 25 20:55:00.224893 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 20:55:00.226041 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. Jun 25 20:55:00.227750 systemd-logind[1493]: Removed session 23. Jun 25 20:55:00.376559 systemd[1]: Started sshd@21-10.230.13.114:22-139.178.89.65:54888.service - OpenSSH per-connection server daemon (139.178.89.65:54888). Jun 25 20:55:01.254322 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 54888 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:55:01.256656 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:55:01.264893 systemd-logind[1493]: New session 24 of user core. Jun 25 20:55:01.274572 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 20:55:03.185594 containerd[1514]: time="2024-06-25T20:55:03.184849245Z" level=info msg="StopContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" with timeout 30 (s)" Jun 25 20:55:03.194215 containerd[1514]: time="2024-06-25T20:55:03.194151589Z" level=info msg="Stop container \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" with signal terminated" Jun 25 20:55:03.261936 systemd[1]: run-containerd-runc-k8s.io-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd-runc.z2AcRZ.mount: Deactivated successfully. Jun 25 20:55:03.264248 systemd[1]: cri-containerd-0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f.scope: Deactivated successfully. Jun 25 20:55:03.306936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f-rootfs.mount: Deactivated successfully. 
Jun 25 20:55:03.308362 containerd[1514]: time="2024-06-25T20:55:03.306818149Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 20:55:03.320855 containerd[1514]: time="2024-06-25T20:55:03.320711985Z" level=info msg="shim disconnected" id=0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f namespace=k8s.io Jun 25 20:55:03.320855 containerd[1514]: time="2024-06-25T20:55:03.320841547Z" level=warning msg="cleaning up after shim disconnected" id=0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f namespace=k8s.io Jun 25 20:55:03.321851 containerd[1514]: time="2024-06-25T20:55:03.320867182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:03.327001 containerd[1514]: time="2024-06-25T20:55:03.326917622Z" level=info msg="StopContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" with timeout 2 (s)" Jun 25 20:55:03.327670 containerd[1514]: time="2024-06-25T20:55:03.327638292Z" level=info msg="Stop container \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" with signal terminated" Jun 25 20:55:03.347940 systemd-networkd[1442]: lxc_health: Link DOWN Jun 25 20:55:03.347953 systemd-networkd[1442]: lxc_health: Lost carrier Jun 25 20:55:03.364417 systemd[1]: cri-containerd-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd.scope: Deactivated successfully. Jun 25 20:55:03.364850 systemd[1]: cri-containerd-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd.scope: Consumed 10.163s CPU time. Jun 25 20:55:03.376288 containerd[1514]: time="2024-06-25T20:55:03.376144742Z" level=info msg="StopContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" returns successfully" Jun 25 20:55:03.391718 containerd[1514]: time="2024-06-25T20:55:03.391619501Z" level=info msg="StopPodSandbox for \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\"" Jun 25 20:55:03.394416 containerd[1514]: time="2024-06-25T20:55:03.391820985Z" level=info msg="Container to stop \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.397479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403-shm.mount: Deactivated successfully. Jun 25 20:55:03.408953 systemd[1]: cri-containerd-36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403.scope: Deactivated successfully. Jun 25 20:55:03.417295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd-rootfs.mount: Deactivated successfully. 
Jun 25 20:55:03.425453 containerd[1514]: time="2024-06-25T20:55:03.425237412Z" level=info msg="shim disconnected" id=697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd namespace=k8s.io Jun 25 20:55:03.425453 containerd[1514]: time="2024-06-25T20:55:03.425413319Z" level=warning msg="cleaning up after shim disconnected" id=697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd namespace=k8s.io Jun 25 20:55:03.425889 containerd[1514]: time="2024-06-25T20:55:03.425430336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:03.450141 containerd[1514]: time="2024-06-25T20:55:03.449927018Z" level=info msg="StopContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" returns successfully" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.452839171Z" level=info msg="StopPodSandbox for \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\"" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.452901812Z" level=info msg="Container to stop \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.452952173Z" level=info msg="Container to stop \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.452969936Z" level=info msg="Container to stop \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.452986321Z" level=info msg="Container to stop \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.453492 containerd[1514]: time="2024-06-25T20:55:03.453001801Z" level=info msg="Container to stop \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 20:55:03.464161 systemd[1]: cri-containerd-600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5.scope: Deactivated successfully. 
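[editorial note] The StopContainer/StopPodSandbox entries above correspond to kubelet calls over the CRI gRPC API. A hedged sketch of issuing the same calls directly against containerd's default socket, assuming the k8s.io/cri-api v1 client and reusing the cilium-operator container and sandbox IDs recorded in the log; this is illustrative, not taken from the log itself:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI socket; adjust if the runtime differs.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// IDs taken from the log above: the cilium-operator container and its
	// sandbox, stopped with the same 30 s grace period kubelet requested.
	const containerID = "0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f"
	const sandboxID = "36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403"

	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: sandboxID,
	}); err != nil {
		log.Fatal(err)
	}
}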
Jun 25 20:55:03.467949 containerd[1514]: time="2024-06-25T20:55:03.467428300Z" level=info msg="shim disconnected" id=36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403 namespace=k8s.io Jun 25 20:55:03.467949 containerd[1514]: time="2024-06-25T20:55:03.467884135Z" level=warning msg="cleaning up after shim disconnected" id=36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403 namespace=k8s.io Jun 25 20:55:03.467949 containerd[1514]: time="2024-06-25T20:55:03.467905689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:03.494561 containerd[1514]: time="2024-06-25T20:55:03.494064470Z" level=info msg="TearDown network for sandbox \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\" successfully" Jun 25 20:55:03.494561 containerd[1514]: time="2024-06-25T20:55:03.494110160Z" level=info msg="StopPodSandbox for \"36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403\" returns successfully" Jun 25 20:55:03.503955 containerd[1514]: time="2024-06-25T20:55:03.503733429Z" level=info msg="shim disconnected" id=600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5 namespace=k8s.io Jun 25 20:55:03.503955 containerd[1514]: time="2024-06-25T20:55:03.503792355Z" level=warning msg="cleaning up after shim disconnected" id=600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5 namespace=k8s.io Jun 25 20:55:03.503955 containerd[1514]: time="2024-06-25T20:55:03.503808583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:03.529579 containerd[1514]: time="2024-06-25T20:55:03.529428635Z" level=info msg="TearDown network for sandbox \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" successfully" Jun 25 20:55:03.529579 containerd[1514]: time="2024-06-25T20:55:03.529583899Z" level=info msg="StopPodSandbox for \"600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5\" returns successfully" Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648002 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27356319-4322-4d73-98be-e9e2aae5e698-cilium-config-path\") pod \"27356319-4322-4d73-98be-e9e2aae5e698\" (UID: \"27356319-4322-4d73-98be-e9e2aae5e698\") " Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648083 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-bpf-maps\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648116 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-cgroup\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648144 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-etc-cni-netd\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648197 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-config-path\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.648559 kubelet[2684]: I0625 20:55:03.648276 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br4lb\" (UniqueName: \"kubernetes.io/projected/27356319-4322-4d73-98be-e9e2aae5e698-kube-api-access-br4lb\") pod \"27356319-4322-4d73-98be-e9e2aae5e698\" (UID: \"27356319-4322-4d73-98be-e9e2aae5e698\") " Jun 25 20:55:03.650673 kubelet[2684]: I0625 20:55:03.648311 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-lib-modules\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.650673 kubelet[2684]: I0625 20:55:03.648338 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-hostproc\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.650673 kubelet[2684]: I0625 20:55:03.648368 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-hubble-tls\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.650673 kubelet[2684]: I0625 20:55:03.648394 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-xtables-lock\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.650673 kubelet[2684]: I0625 20:55:03.648420 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cni-path\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.650917 kubelet[2684]: I0625 20:55:03.650801 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.650917 kubelet[2684]: I0625 20:55:03.650869 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.650917 kubelet[2684]: I0625 20:55:03.650902 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.651558 kubelet[2684]: I0625 20:55:03.648497 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27356319-4322-4d73-98be-e9e2aae5e698-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27356319-4322-4d73-98be-e9e2aae5e698" (UID: "27356319-4322-4d73-98be-e9e2aae5e698"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 20:55:03.651558 kubelet[2684]: I0625 20:55:03.648498 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cni-path" (OuterVolumeSpecName: "cni-path") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.651558 kubelet[2684]: I0625 20:55:03.651133 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-hostproc" (OuterVolumeSpecName: "hostproc") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.654396 kubelet[2684]: I0625 20:55:03.654365 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 20:55:03.665702 kubelet[2684]: I0625 20:55:03.665568 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27356319-4322-4d73-98be-e9e2aae5e698-kube-api-access-br4lb" (OuterVolumeSpecName: "kube-api-access-br4lb") pod "27356319-4322-4d73-98be-e9e2aae5e698" (UID: "27356319-4322-4d73-98be-e9e2aae5e698"). InnerVolumeSpecName "kube-api-access-br4lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 20:55:03.665702 kubelet[2684]: I0625 20:55:03.665638 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.665702 kubelet[2684]: I0625 20:55:03.665640 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 20:55:03.665702 kubelet[2684]: I0625 20:55:03.665679 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.749577 kubelet[2684]: I0625 20:55:03.749414 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-net\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.749577 kubelet[2684]: I0625 20:55:03.749483 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-kernel\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.749577 kubelet[2684]: I0625 20:55:03.749545 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88717f99-30cc-4aab-974e-de23ae6b5074-clustermesh-secrets\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.749577 kubelet[2684]: I0625 20:55:03.749585 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-run\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.749957 kubelet[2684]: I0625 20:55:03.749627 2684 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l62\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62\") pod \"88717f99-30cc-4aab-974e-de23ae6b5074\" (UID: \"88717f99-30cc-4aab-974e-de23ae6b5074\") " Jun 25 20:55:03.751787 kubelet[2684]: I0625 20:55:03.750355 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.751787 kubelet[2684]: I0625 20:55:03.750412 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.755509 kubelet[2684]: I0625 20:55:03.755392 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88717f99-30cc-4aab-974e-de23ae6b5074-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 20:55:03.755509 kubelet[2684]: I0625 20:55:03.755459 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "88717f99-30cc-4aab-974e-de23ae6b5074" (UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.755900 2684 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-br4lb\" (UniqueName: \"kubernetes.io/projected/27356319-4322-4d73-98be-e9e2aae5e698-kube-api-access-br4lb\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.755941 2684 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-lib-modules\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.755991 2684 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-hostproc\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.756016 2684 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-hubble-tls\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.756033 2684 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-xtables-lock\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.756049 2684 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cni-path\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.756080 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27356319-4322-4d73-98be-e9e2aae5e698-cilium-config-path\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756232 kubelet[2684]: I0625 20:55:03.756097 2684 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-bpf-maps\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756612 kubelet[2684]: I0625 20:55:03.756118 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-cgroup\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756612 kubelet[2684]: I0625 20:55:03.756134 2684 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-etc-cni-netd\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.756612 kubelet[2684]: I0625 20:55:03.756151 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-config-path\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.759586 kubelet[2684]: I0625 20:55:03.759540 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62" (OuterVolumeSpecName: "kube-api-access-42l62") pod "88717f99-30cc-4aab-974e-de23ae6b5074" 
(UID: "88717f99-30cc-4aab-974e-de23ae6b5074"). InnerVolumeSpecName "kube-api-access-42l62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 20:55:03.857431 kubelet[2684]: I0625 20:55:03.857315 2684 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88717f99-30cc-4aab-974e-de23ae6b5074-clustermesh-secrets\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.857431 kubelet[2684]: I0625 20:55:03.857419 2684 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-cilium-run\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.857431 kubelet[2684]: I0625 20:55:03.857443 2684 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-42l62\" (UniqueName: \"kubernetes.io/projected/88717f99-30cc-4aab-974e-de23ae6b5074-kube-api-access-42l62\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.857744 kubelet[2684]: I0625 20:55:03.857461 2684 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-net\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.857744 kubelet[2684]: I0625 20:55:03.857479 2684 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88717f99-30cc-4aab-974e-de23ae6b5074-host-proc-sys-kernel\") on node \"srv-azn0z.gb1.brightbox.com\" DevicePath \"\"" Jun 25 20:55:03.934052 systemd[1]: Removed slice kubepods-burstable-pod88717f99_30cc_4aab_974e_de23ae6b5074.slice - libcontainer container kubepods-burstable-pod88717f99_30cc_4aab_974e_de23ae6b5074.slice. Jun 25 20:55:03.934350 systemd[1]: kubepods-burstable-pod88717f99_30cc_4aab_974e_de23ae6b5074.slice: Consumed 10.275s CPU time. Jun 25 20:55:03.961643 kubelet[2684]: I0625 20:55:03.961594 2684 scope.go:117] "RemoveContainer" containerID="697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd" Jun 25 20:55:03.965308 containerd[1514]: time="2024-06-25T20:55:03.965261014Z" level=info msg="RemoveContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\"" Jun 25 20:55:03.977550 systemd[1]: Removed slice kubepods-besteffort-pod27356319_4322_4d73_98be_e9e2aae5e698.slice - libcontainer container kubepods-besteffort-pod27356319_4322_4d73_98be_e9e2aae5e698.slice. 
Jun 25 20:55:03.978971 containerd[1514]: time="2024-06-25T20:55:03.977957665Z" level=info msg="RemoveContainer for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" returns successfully" Jun 25 20:55:03.981255 kubelet[2684]: I0625 20:55:03.980586 2684 scope.go:117] "RemoveContainer" containerID="2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b" Jun 25 20:55:03.985469 containerd[1514]: time="2024-06-25T20:55:03.984250023Z" level=info msg="RemoveContainer for \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\"" Jun 25 20:55:03.990486 containerd[1514]: time="2024-06-25T20:55:03.990433633Z" level=info msg="RemoveContainer for \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\" returns successfully" Jun 25 20:55:03.991074 kubelet[2684]: I0625 20:55:03.991043 2684 scope.go:117] "RemoveContainer" containerID="28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4" Jun 25 20:55:03.993838 containerd[1514]: time="2024-06-25T20:55:03.993482588Z" level=info msg="RemoveContainer for \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\"" Jun 25 20:55:04.002567 containerd[1514]: time="2024-06-25T20:55:04.000967789Z" level=info msg="RemoveContainer for \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\" returns successfully" Jun 25 20:55:04.002677 kubelet[2684]: I0625 20:55:04.001324 2684 scope.go:117] "RemoveContainer" containerID="ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518" Jun 25 20:55:04.004700 containerd[1514]: time="2024-06-25T20:55:04.004665425Z" level=info msg="RemoveContainer for \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\"" Jun 25 20:55:04.007789 containerd[1514]: time="2024-06-25T20:55:04.007756825Z" level=info msg="RemoveContainer for \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\" returns successfully" Jun 25 20:55:04.008031 kubelet[2684]: I0625 20:55:04.008004 2684 scope.go:117] "RemoveContainer" containerID="f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e" Jun 25 20:55:04.010428 containerd[1514]: time="2024-06-25T20:55:04.010270928Z" level=info msg="RemoveContainer for \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\"" Jun 25 20:55:04.013622 containerd[1514]: time="2024-06-25T20:55:04.013553598Z" level=info msg="RemoveContainer for \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\" returns successfully" Jun 25 20:55:04.014094 containerd[1514]: time="2024-06-25T20:55:04.013958328Z" level=error msg="ContainerStatus for \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\": not found" Jun 25 20:55:04.014165 kubelet[2684]: I0625 20:55:04.013733 2684 scope.go:117] "RemoveContainer" containerID="697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd" Jun 25 20:55:04.026748 kubelet[2684]: E0625 20:55:04.026659 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\": not found" containerID="697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd" Jun 25 20:55:04.033110 kubelet[2684]: I0625 20:55:04.033029 2684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd"} err="failed to get container status \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"697dcab583ae0726aa96a719aefaf6d348a34c42ed330acecdf829f12a5f92dd\": not found" Jun 25 20:55:04.033110 kubelet[2684]: I0625 20:55:04.033086 2684 scope.go:117] "RemoveContainer" containerID="2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b" Jun 25 20:55:04.033578 containerd[1514]: time="2024-06-25T20:55:04.033517246Z" level=error msg="ContainerStatus for \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\": not found" Jun 25 20:55:04.033809 kubelet[2684]: E0625 20:55:04.033760 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\": not found" containerID="2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b" Jun 25 20:55:04.033809 kubelet[2684]: I0625 20:55:04.033805 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b"} err="failed to get container status \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ccd1ef11bf0b62a7b4c79bb00af54f7e61ee67e54a7519b5ceaa310da1f293b\": not found" Jun 25 20:55:04.033942 kubelet[2684]: I0625 20:55:04.033826 2684 scope.go:117] "RemoveContainer" containerID="28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4" Jun 25 20:55:04.034452 containerd[1514]: time="2024-06-25T20:55:04.034334042Z" level=error msg="ContainerStatus for \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\": not found" Jun 25 20:55:04.034549 kubelet[2684]: E0625 20:55:04.034520 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\": not found" containerID="28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4" Jun 25 20:55:04.034619 kubelet[2684]: I0625 20:55:04.034567 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4"} err="failed to get container status \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"28939380472e09ab2f060f64e0612e9f6b105aa7d13b363e3c7ac6351c4ad5d4\": not found" Jun 25 20:55:04.034619 kubelet[2684]: I0625 20:55:04.034584 2684 scope.go:117] "RemoveContainer" containerID="ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518" Jun 25 20:55:04.034963 containerd[1514]: time="2024-06-25T20:55:04.034910096Z" level=error msg="ContainerStatus for 
\"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\": not found" Jun 25 20:55:04.035198 kubelet[2684]: E0625 20:55:04.035144 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\": not found" containerID="ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518" Jun 25 20:55:04.035270 kubelet[2684]: I0625 20:55:04.035207 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518"} err="failed to get container status \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae4eb197f4b2ef779fb8d3a60c0cd3d2ffe967974a3c8b57e90870addcb59518\": not found" Jun 25 20:55:04.035270 kubelet[2684]: I0625 20:55:04.035226 2684 scope.go:117] "RemoveContainer" containerID="f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e" Jun 25 20:55:04.035628 containerd[1514]: time="2024-06-25T20:55:04.035460765Z" level=error msg="ContainerStatus for \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\": not found" Jun 25 20:55:04.036067 kubelet[2684]: E0625 20:55:04.035838 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\": not found" containerID="f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e" Jun 25 20:55:04.036067 kubelet[2684]: I0625 20:55:04.035881 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e"} err="failed to get container status \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9e89cf141804455f26c03bc44972946be2c276ab692a5584fa11a7030068b9e\": not found" Jun 25 20:55:04.036067 kubelet[2684]: I0625 20:55:04.035900 2684 scope.go:117] "RemoveContainer" containerID="0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f" Jun 25 20:55:04.037335 containerd[1514]: time="2024-06-25T20:55:04.037301901Z" level=info msg="RemoveContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\"" Jun 25 20:55:04.040506 containerd[1514]: time="2024-06-25T20:55:04.040469025Z" level=info msg="RemoveContainer for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" returns successfully" Jun 25 20:55:04.040685 kubelet[2684]: I0625 20:55:04.040669 2684 scope.go:117] "RemoveContainer" containerID="0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f" Jun 25 20:55:04.041223 containerd[1514]: time="2024-06-25T20:55:04.040953464Z" level=error msg="ContainerStatus for \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\": not found" Jun 25 20:55:04.041324 kubelet[2684]: E0625 20:55:04.041148 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\": not found" containerID="0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f" Jun 25 20:55:04.041324 kubelet[2684]: I0625 20:55:04.041198 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f"} err="failed to get container status \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0573ef7457b1c5c019f8807f2a2f1897cecd734ffbce1648900f0a45e7e4221f\": not found" Jun 25 20:55:04.253408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5-rootfs.mount: Deactivated successfully. Jun 25 20:55:04.253592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-600186f6f4cf892e76d2421d822b07206eeb8f1e16c396070c18d15ab230a8b5-shm.mount: Deactivated successfully. Jun 25 20:55:04.253707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36acfad98d0555b3075b0493cf6e1d0fa0f58af995033ebe48c2fa2497240403-rootfs.mount: Deactivated successfully. Jun 25 20:55:04.253881 systemd[1]: var-lib-kubelet-pods-88717f99\x2d30cc\x2d4aab\x2d974e\x2dde23ae6b5074-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d42l62.mount: Deactivated successfully. Jun 25 20:55:04.254002 systemd[1]: var-lib-kubelet-pods-27356319\x2d4322\x2d4d73\x2d98be\x2de9e2aae5e698-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr4lb.mount: Deactivated successfully. Jun 25 20:55:04.254135 systemd[1]: var-lib-kubelet-pods-88717f99\x2d30cc\x2d4aab\x2d974e\x2dde23ae6b5074-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 20:55:04.254347 systemd[1]: var-lib-kubelet-pods-88717f99\x2d30cc\x2d4aab\x2d974e\x2dde23ae6b5074-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 20:55:04.478789 kubelet[2684]: I0625 20:55:04.478747 2684 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="27356319-4322-4d73-98be-e9e2aae5e698" path="/var/lib/kubelet/pods/27356319-4322-4d73-98be-e9e2aae5e698/volumes" Jun 25 20:55:04.479980 kubelet[2684]: I0625 20:55:04.479957 2684 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" path="/var/lib/kubelet/pods/88717f99-30cc-4aab-974e-de23ae6b5074/volumes" Jun 25 20:55:04.662387 kubelet[2684]: E0625 20:55:04.662285 2684 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 20:55:05.257879 sshd[4245]: pam_unix(sshd:session): session closed for user core Jun 25 20:55:05.262098 systemd[1]: sshd@21-10.230.13.114:22-139.178.89.65:54888.service: Deactivated successfully. Jun 25 20:55:05.265028 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 20:55:05.267078 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. Jun 25 20:55:05.268701 systemd-logind[1493]: Removed session 24. 
Jun 25 20:55:05.417657 systemd[1]: Started sshd@22-10.230.13.114:22-139.178.89.65:54894.service - OpenSSH per-connection server daemon (139.178.89.65:54894). Jun 25 20:55:06.307860 sshd[4405]: Accepted publickey for core from 139.178.89.65 port 54894 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:55:06.309881 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:55:06.317697 systemd-logind[1493]: New session 25 of user core. Jun 25 20:55:06.321715 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 20:55:07.266541 kubelet[2684]: I0625 20:55:07.266493 2684 setters.go:568] "Node became not ready" node="srv-azn0z.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T20:55:07Z","lastTransitionTime":"2024-06-25T20:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 20:55:07.631276 kubelet[2684]: I0625 20:55:07.630676 2684 topology_manager.go:215] "Topology Admit Handler" podUID="56adb1e0-bc8c-467b-97e4-96cc727a8564" podNamespace="kube-system" podName="cilium-jzszb" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632621 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="clean-cilium-state" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632670 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="cilium-agent" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632691 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27356319-4322-4d73-98be-e9e2aae5e698" containerName="cilium-operator" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632703 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="mount-cgroup" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632714 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="mount-bpf-fs" Jun 25 20:55:07.633088 kubelet[2684]: E0625 20:55:07.632726 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="apply-sysctl-overwrites" Jun 25 20:55:07.633088 kubelet[2684]: I0625 20:55:07.632783 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="27356319-4322-4d73-98be-e9e2aae5e698" containerName="cilium-operator" Jun 25 20:55:07.633088 kubelet[2684]: I0625 20:55:07.632799 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="88717f99-30cc-4aab-974e-de23ae6b5074" containerName="cilium-agent" Jun 25 20:55:07.661752 systemd[1]: Created slice kubepods-burstable-pod56adb1e0_bc8c_467b_97e4_96cc727a8564.slice - libcontainer container kubepods-burstable-pod56adb1e0_bc8c_467b_97e4_96cc727a8564.slice. 
Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.681862 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-bpf-maps\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.681926 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-cilium-run\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.681961 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-xtables-lock\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.681994 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-host-proc-sys-kernel\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.682025 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-lib-modules\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.683927 kubelet[2684]: I0625 20:55:07.682055 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56adb1e0-bc8c-467b-97e4-96cc727a8564-clustermesh-secrets\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682090 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56adb1e0-bc8c-467b-97e4-96cc727a8564-cilium-ipsec-secrets\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682122 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-cilium-cgroup\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682157 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-cni-path\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682210 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-hostproc\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682249 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-host-proc-sys-net\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684373 kubelet[2684]: I0625 20:55:07.682281 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpzx\" (UniqueName: \"kubernetes.io/projected/56adb1e0-bc8c-467b-97e4-96cc727a8564-kube-api-access-llpzx\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684595 kubelet[2684]: I0625 20:55:07.682315 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56adb1e0-bc8c-467b-97e4-96cc727a8564-hubble-tls\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684595 kubelet[2684]: I0625 20:55:07.682349 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56adb1e0-bc8c-467b-97e4-96cc727a8564-etc-cni-netd\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.684595 kubelet[2684]: I0625 20:55:07.682380 2684 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56adb1e0-bc8c-467b-97e4-96cc727a8564-cilium-config-path\") pod \"cilium-jzszb\" (UID: \"56adb1e0-bc8c-467b-97e4-96cc727a8564\") " pod="kube-system/cilium-jzszb" Jun 25 20:55:07.752327 sshd[4405]: pam_unix(sshd:session): session closed for user core Jun 25 20:55:07.757319 systemd[1]: sshd@22-10.230.13.114:22-139.178.89.65:54894.service: Deactivated successfully. Jun 25 20:55:07.759889 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 20:55:07.760986 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. Jun 25 20:55:07.762784 systemd-logind[1493]: Removed session 25. Jun 25 20:55:07.907131 systemd[1]: Started sshd@23-10.230.13.114:22-139.178.89.65:36128.service - OpenSSH per-connection server daemon (139.178.89.65:36128). Jun 25 20:55:07.967497 containerd[1514]: time="2024-06-25T20:55:07.967433307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jzszb,Uid:56adb1e0-bc8c-467b-97e4-96cc727a8564,Namespace:kube-system,Attempt:0,}" Jun 25 20:55:07.997263 containerd[1514]: time="2024-06-25T20:55:07.996991179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 20:55:07.997263 containerd[1514]: time="2024-06-25T20:55:07.997102137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:55:07.998083 containerd[1514]: time="2024-06-25T20:55:07.997153119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 20:55:07.998337 containerd[1514]: time="2024-06-25T20:55:07.998170562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 20:55:08.023431 systemd[1]: Started cri-containerd-01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5.scope - libcontainer container 01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5. Jun 25 20:55:08.054718 containerd[1514]: time="2024-06-25T20:55:08.054606781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jzszb,Uid:56adb1e0-bc8c-467b-97e4-96cc727a8564,Namespace:kube-system,Attempt:0,} returns sandbox id \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\"" Jun 25 20:55:08.060243 containerd[1514]: time="2024-06-25T20:55:08.060155340Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 20:55:08.073449 containerd[1514]: time="2024-06-25T20:55:08.073383640Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51\"" Jun 25 20:55:08.074429 containerd[1514]: time="2024-06-25T20:55:08.074389920Z" level=info msg="StartContainer for \"57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51\"" Jun 25 20:55:08.110496 systemd[1]: Started cri-containerd-57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51.scope - libcontainer container 57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51. Jun 25 20:55:08.149706 containerd[1514]: time="2024-06-25T20:55:08.149428730Z" level=info msg="StartContainer for \"57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51\" returns successfully" Jun 25 20:55:08.172098 systemd[1]: cri-containerd-57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51.scope: Deactivated successfully. Jun 25 20:55:08.220555 containerd[1514]: time="2024-06-25T20:55:08.220350349Z" level=info msg="shim disconnected" id=57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51 namespace=k8s.io Jun 25 20:55:08.220555 containerd[1514]: time="2024-06-25T20:55:08.220465160Z" level=warning msg="cleaning up after shim disconnected" id=57837ada57fb79bf69d2b85892839c4f5b930d00088a3bdab77f3acf83cc2e51 namespace=k8s.io Jun 25 20:55:08.220555 containerd[1514]: time="2024-06-25T20:55:08.220483092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:08.236942 containerd[1514]: time="2024-06-25T20:55:08.236765652Z" level=warning msg="cleanup warnings time=\"2024-06-25T20:55:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 20:55:08.789774 sshd[4422]: Accepted publickey for core from 139.178.89.65 port 36128 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:55:08.791865 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:55:08.802732 systemd-logind[1493]: New session 26 of user core. Jun 25 20:55:08.820445 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 20:55:08.988478 containerd[1514]: time="2024-06-25T20:55:08.988354794Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 20:55:09.012333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546431947.mount: Deactivated successfully. Jun 25 20:55:09.017581 containerd[1514]: time="2024-06-25T20:55:09.017334951Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7\"" Jun 25 20:55:09.018513 containerd[1514]: time="2024-06-25T20:55:09.018392249Z" level=info msg="StartContainer for \"a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7\"" Jun 25 20:55:09.068037 systemd[1]: Started cri-containerd-a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7.scope - libcontainer container a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7. Jun 25 20:55:09.104548 containerd[1514]: time="2024-06-25T20:55:09.104478266Z" level=info msg="StartContainer for \"a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7\" returns successfully" Jun 25 20:55:09.120049 systemd[1]: cri-containerd-a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7.scope: Deactivated successfully. Jun 25 20:55:09.157227 containerd[1514]: time="2024-06-25T20:55:09.157022126Z" level=info msg="shim disconnected" id=a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7 namespace=k8s.io Jun 25 20:55:09.157227 containerd[1514]: time="2024-06-25T20:55:09.157105687Z" level=warning msg="cleaning up after shim disconnected" id=a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7 namespace=k8s.io Jun 25 20:55:09.157227 containerd[1514]: time="2024-06-25T20:55:09.157124338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:09.398129 sshd[4422]: pam_unix(sshd:session): session closed for user core Jun 25 20:55:09.403100 systemd[1]: sshd@23-10.230.13.114:22-139.178.89.65:36128.service: Deactivated successfully. Jun 25 20:55:09.406103 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 20:55:09.407685 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. Jun 25 20:55:09.409037 systemd-logind[1493]: Removed session 26. Jun 25 20:55:09.559748 systemd[1]: Started sshd@24-10.230.13.114:22-139.178.89.65:36136.service - OpenSSH per-connection server daemon (139.178.89.65:36136). Jun 25 20:55:09.664352 kubelet[2684]: E0625 20:55:09.664131 2684 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 20:55:09.799151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46689c5a0068bef3a719b4019a6e9549217c41eddd61c3f39efc691b87bb6e7-rootfs.mount: Deactivated successfully. 
Jun 25 20:55:09.992574 containerd[1514]: time="2024-06-25T20:55:09.992017192Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 20:55:10.018816 containerd[1514]: time="2024-06-25T20:55:10.018682376Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2\"" Jun 25 20:55:10.019602 containerd[1514]: time="2024-06-25T20:55:10.019478568Z" level=info msg="StartContainer for \"33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2\"" Jun 25 20:55:10.059392 systemd[1]: Started cri-containerd-33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2.scope - libcontainer container 33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2. Jun 25 20:55:10.100441 containerd[1514]: time="2024-06-25T20:55:10.098831855Z" level=info msg="StartContainer for \"33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2\" returns successfully" Jun 25 20:55:10.109065 systemd[1]: cri-containerd-33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2.scope: Deactivated successfully. Jun 25 20:55:10.146704 containerd[1514]: time="2024-06-25T20:55:10.146556340Z" level=info msg="shim disconnected" id=33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2 namespace=k8s.io Jun 25 20:55:10.146704 containerd[1514]: time="2024-06-25T20:55:10.146629709Z" level=warning msg="cleaning up after shim disconnected" id=33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2 namespace=k8s.io Jun 25 20:55:10.146704 containerd[1514]: time="2024-06-25T20:55:10.146647506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:10.431247 sshd[4593]: Accepted publickey for core from 139.178.89.65 port 36136 ssh2: RSA SHA256:XDH//XtaWDaj0VA2Oe+IvhZQIVSN+HRvWfRbEiyl+ag Jun 25 20:55:10.433260 sshd[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 20:55:10.440206 systemd-logind[1493]: New session 27 of user core. Jun 25 20:55:10.445405 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 20:55:10.798701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33efe4171251c2cfdd56ec8326ba5916135132b19b43ce3fb8a506188ff99ed2-rootfs.mount: Deactivated successfully. Jun 25 20:55:11.004965 containerd[1514]: time="2024-06-25T20:55:11.004897450Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 20:55:11.032874 containerd[1514]: time="2024-06-25T20:55:11.032810840Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598\"" Jun 25 20:55:11.034968 containerd[1514]: time="2024-06-25T20:55:11.034838898Z" level=info msg="StartContainer for \"2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598\"" Jun 25 20:55:11.095392 systemd[1]: Started cri-containerd-2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598.scope - libcontainer container 2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598. 
Jun 25 20:55:11.147075 containerd[1514]: time="2024-06-25T20:55:11.147013244Z" level=info msg="StartContainer for \"2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598\" returns successfully" Jun 25 20:55:11.151636 systemd[1]: cri-containerd-2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598.scope: Deactivated successfully. Jun 25 20:55:11.214830 containerd[1514]: time="2024-06-25T20:55:11.214731457Z" level=info msg="shim disconnected" id=2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598 namespace=k8s.io Jun 25 20:55:11.215413 containerd[1514]: time="2024-06-25T20:55:11.214887626Z" level=warning msg="cleaning up after shim disconnected" id=2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598 namespace=k8s.io Jun 25 20:55:11.215413 containerd[1514]: time="2024-06-25T20:55:11.214908868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 20:55:11.798996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2982f4511181155ce85ae57c9dd9df2fdc0dcfb0028264e8f557fcc314f62598-rootfs.mount: Deactivated successfully. Jun 25 20:55:12.008310 containerd[1514]: time="2024-06-25T20:55:12.007965167Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 20:55:12.033939 containerd[1514]: time="2024-06-25T20:55:12.033862788Z" level=info msg="CreateContainer within sandbox \"01cf360352c0c8e6a5d2d09c5870333abeffd3ed650d911d9c3c841786c1f7c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0\"" Jun 25 20:55:12.036165 containerd[1514]: time="2024-06-25T20:55:12.036076949Z" level=info msg="StartContainer for \"2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0\"" Jun 25 20:55:12.082400 systemd[1]: Started cri-containerd-2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0.scope - libcontainer container 2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0. Jun 25 20:55:12.128590 containerd[1514]: time="2024-06-25T20:55:12.128514045Z" level=info msg="StartContainer for \"2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0\" returns successfully" Jun 25 20:55:12.806007 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 25 20:55:16.497337 systemd-networkd[1442]: lxc_health: Link UP Jun 25 20:55:16.499824 systemd-networkd[1442]: lxc_health: Gained carrier Jun 25 20:55:17.993053 kubelet[2684]: I0625 20:55:17.992995 2684 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jzszb" podStartSLOduration=10.992891067 podStartE2EDuration="10.992891067s" podCreationTimestamp="2024-06-25 20:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 20:55:13.045478594 +0000 UTC m=+148.859731279" watchObservedRunningTime="2024-06-25 20:55:17.992891067 +0000 UTC m=+153.807143758" Jun 25 20:55:18.366471 systemd-networkd[1442]: lxc_health: Gained IPv6LL Jun 25 20:55:20.110836 systemd[1]: run-containerd-runc-k8s.io-2347c325fd4903ac2671556dd103646b4b2da73323ef71ac8c7bffa37d6e9ee0-runc.ef7wtr.mount: Deactivated successfully. Jun 25 20:55:22.539981 sshd[4593]: pam_unix(sshd:session): session closed for user core Jun 25 20:55:22.544673 systemd[1]: sshd@24-10.230.13.114:22-139.178.89.65:36136.service: Deactivated successfully. 
Jun 25 20:55:22.549041 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 20:55:22.552859 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit. Jun 25 20:55:22.556564 systemd-logind[1493]: Removed session 27.