Nov 13 09:23:47.026705 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024 Nov 13 09:23:47.026759 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 09:23:47.026773 kernel: BIOS-provided physical RAM map: Nov 13 09:23:47.026789 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 13 09:23:47.026798 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 13 09:23:47.026807 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 13 09:23:47.026818 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Nov 13 09:23:47.026828 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Nov 13 09:23:47.026837 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 13 09:23:47.026847 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 13 09:23:47.026857 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 13 09:23:47.026866 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 13 09:23:47.026881 kernel: NX (Execute Disable) protection: active Nov 13 09:23:47.026891 kernel: APIC: Static calls initialized Nov 13 09:23:47.026903 kernel: SMBIOS 2.8 present. Nov 13 09:23:47.026914 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Nov 13 09:23:47.026924 kernel: Hypervisor detected: KVM Nov 13 09:23:47.026939 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 13 09:23:47.026950 kernel: kvm-clock: using sched offset of 4462782324 cycles Nov 13 09:23:47.026962 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 13 09:23:47.026973 kernel: tsc: Detected 2799.998 MHz processor Nov 13 09:23:47.026984 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 13 09:23:47.026995 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 13 09:23:47.027005 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 13 09:23:47.027016 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 13 09:23:47.027027 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 13 09:23:47.027042 kernel: Using GB pages for direct mapping Nov 13 09:23:47.027053 kernel: ACPI: Early table checksum verification disabled Nov 13 09:23:47.027063 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 13 09:23:47.027074 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027085 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027095 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027106 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Nov 13 09:23:47.027117 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027127 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Nov 13 09:23:47.027143 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027153 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 09:23:47.027164 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Nov 13 09:23:47.027175 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Nov 13 09:23:47.027186 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Nov 13 09:23:47.027203 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Nov 13 09:23:47.027214 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Nov 13 09:23:47.027229 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Nov 13 09:23:47.027241 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Nov 13 09:23:47.027252 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 13 09:23:47.027263 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 13 09:23:47.027274 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Nov 13 09:23:47.027285 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Nov 13 09:23:47.027296 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Nov 13 09:23:47.027311 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Nov 13 09:23:47.027335 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Nov 13 09:23:47.027604 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Nov 13 09:23:47.027618 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Nov 13 09:23:47.027629 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Nov 13 09:23:47.027640 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Nov 13 09:23:47.027651 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Nov 13 09:23:47.027662 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Nov 13 09:23:47.027673 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Nov 13 09:23:47.027684 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Nov 13 09:23:47.027702 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Nov 13 09:23:47.027714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 13 09:23:47.027725 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 13 09:23:47.027736 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Nov 13 09:23:47.027748 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Nov 13 09:23:47.027759 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Nov 13 09:23:47.027770 kernel: Zone ranges: Nov 13 09:23:47.027782 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 13 09:23:47.027793 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Nov 13 09:23:47.027809 kernel: Normal empty Nov 13 09:23:47.027820 kernel: Movable zone start for each node Nov 13 09:23:47.027831 kernel: Early memory node ranges Nov 13 09:23:47.027842 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 13 09:23:47.027853 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Nov 13 09:23:47.027865 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Nov 13 09:23:47.027876 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 13 09:23:47.027887 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 13 09:23:47.027898 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Nov 13 09:23:47.027909 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 13 09:23:47.027925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 13 09:23:47.027936 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Nov 13 09:23:47.027948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 13 09:23:47.027959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 13 09:23:47.027970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 13 09:23:47.027981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 13 09:23:47.027992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 13 09:23:47.028003 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 13 09:23:47.028014 kernel: TSC deadline timer available Nov 13 09:23:47.028030 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Nov 13 09:23:47.028042 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 13 09:23:47.028053 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 13 09:23:47.028064 kernel: Booting paravirtualized kernel on KVM Nov 13 09:23:47.028075 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 13 09:23:47.028086 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 13 09:23:47.028110 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Nov 13 09:23:47.028128 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Nov 13 09:23:47.028139 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 13 09:23:47.028156 kernel: kvm-guest: PV spinlocks enabled Nov 13 09:23:47.028167 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 13 09:23:47.028179 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 09:23:47.028191 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 13 09:23:47.028202 kernel: random: crng init done Nov 13 09:23:47.028213 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 13 09:23:47.028224 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 13 09:23:47.028235 kernel: Fallback order for Node 0: 0 Nov 13 09:23:47.028251 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Nov 13 09:23:47.028263 kernel: Policy zone: DMA32 Nov 13 09:23:47.028274 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 13 09:23:47.028285 kernel: software IO TLB: area num 16. Nov 13 09:23:47.028297 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 194828K reserved, 0K cma-reserved) Nov 13 09:23:47.028309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 13 09:23:47.028330 kernel: Kernel/User page tables isolation: enabled Nov 13 09:23:47.028368 kernel: ftrace: allocating 37801 entries in 148 pages Nov 13 09:23:47.028381 kernel: ftrace: allocated 148 pages with 3 groups Nov 13 09:23:47.028399 kernel: Dynamic Preempt: voluntary Nov 13 09:23:47.028410 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 13 09:23:47.028422 kernel: rcu: RCU event tracing is enabled. 
Nov 13 09:23:47.028434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 13 09:23:47.028445 kernel: Trampoline variant of Tasks RCU enabled. Nov 13 09:23:47.028469 kernel: Rude variant of Tasks RCU enabled. Nov 13 09:23:47.028485 kernel: Tracing variant of Tasks RCU enabled. Nov 13 09:23:47.028497 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 13 09:23:47.028509 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 13 09:23:47.028521 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Nov 13 09:23:47.028533 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 13 09:23:47.028544 kernel: Console: colour VGA+ 80x25 Nov 13 09:23:47.028561 kernel: printk: console [tty0] enabled Nov 13 09:23:47.028573 kernel: printk: console [ttyS0] enabled Nov 13 09:23:47.028585 kernel: ACPI: Core revision 20230628 Nov 13 09:23:47.028597 kernel: APIC: Switch to symmetric I/O mode setup Nov 13 09:23:47.028608 kernel: x2apic enabled Nov 13 09:23:47.028624 kernel: APIC: Switched APIC routing to: physical x2apic Nov 13 09:23:47.028637 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Nov 13 09:23:47.028649 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Nov 13 09:23:47.028660 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 13 09:23:47.028672 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 13 09:23:47.028684 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 13 09:23:47.028695 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 13 09:23:47.028719 kernel: Spectre V2 : Mitigation: Retpolines Nov 13 09:23:47.028731 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 13 09:23:47.028747 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 13 09:23:47.028758 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 13 09:23:47.028770 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 13 09:23:47.028781 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 13 09:23:47.028805 kernel: MDS: Mitigation: Clear CPU buffers Nov 13 09:23:47.028816 kernel: MMIO Stale Data: Unknown: No mitigations Nov 13 09:23:47.028828 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 13 09:23:47.028839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 13 09:23:47.028851 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 13 09:23:47.028863 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 13 09:23:47.028874 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 13 09:23:47.028891 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 13 09:23:47.028903 kernel: Freeing SMP alternatives memory: 32K Nov 13 09:23:47.028914 kernel: pid_max: default: 32768 minimum: 301 Nov 13 09:23:47.028926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 13 09:23:47.028937 kernel: landlock: Up and running. Nov 13 09:23:47.028949 kernel: SELinux: Initializing. 
Nov 13 09:23:47.028961 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 13 09:23:47.028972 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 13 09:23:47.028984 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Nov 13 09:23:47.028996 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 13 09:23:47.029008 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 13 09:23:47.029025 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 13 09:23:47.029037 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Nov 13 09:23:47.029049 kernel: signal: max sigframe size: 1776 Nov 13 09:23:47.029060 kernel: rcu: Hierarchical SRCU implementation. Nov 13 09:23:47.029072 kernel: rcu: Max phase no-delay instances is 400. Nov 13 09:23:47.029084 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 13 09:23:47.029096 kernel: smp: Bringing up secondary CPUs ... Nov 13 09:23:47.029108 kernel: smpboot: x86: Booting SMP configuration: Nov 13 09:23:47.029119 kernel: .... node #0, CPUs: #1 Nov 13 09:23:47.029136 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 13 09:23:47.029147 kernel: smp: Brought up 1 node, 2 CPUs Nov 13 09:23:47.029159 kernel: smpboot: Max logical packages: 16 Nov 13 09:23:47.029171 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Nov 13 09:23:47.029182 kernel: devtmpfs: initialized Nov 13 09:23:47.029194 kernel: x86/mm: Memory block size: 128MB Nov 13 09:23:47.029206 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 13 09:23:47.029218 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 13 09:23:47.029230 kernel: pinctrl core: initialized pinctrl subsystem Nov 13 09:23:47.029246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 13 09:23:47.029258 kernel: audit: initializing netlink subsys (disabled) Nov 13 09:23:47.029270 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 13 09:23:47.029281 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 13 09:23:47.029293 kernel: audit: type=2000 audit(1731489825.774:1): state=initialized audit_enabled=0 res=1 Nov 13 09:23:47.029304 kernel: cpuidle: using governor menu Nov 13 09:23:47.029326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 13 09:23:47.029434 kernel: dca service started, version 1.12.1 Nov 13 09:23:47.029453 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 13 09:23:47.029471 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 13 09:23:47.029483 kernel: PCI: Using configuration type 1 for base access Nov 13 09:23:47.029496 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 13 09:23:47.029508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 13 09:23:47.029520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 13 09:23:47.029532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 13 09:23:47.029544 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 13 09:23:47.029556 kernel: ACPI: Added _OSI(Module Device) Nov 13 09:23:47.029568 kernel: ACPI: Added _OSI(Processor Device) Nov 13 09:23:47.029585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 13 09:23:47.029597 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 13 09:23:47.029609 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 13 09:23:47.029621 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 13 09:23:47.029633 kernel: ACPI: Interpreter enabled Nov 13 09:23:47.029645 kernel: ACPI: PM: (supports S0 S5) Nov 13 09:23:47.029656 kernel: ACPI: Using IOAPIC for interrupt routing Nov 13 09:23:47.029668 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 13 09:23:47.029681 kernel: PCI: Using E820 reservations for host bridge windows Nov 13 09:23:47.029697 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 13 09:23:47.029709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 13 09:23:47.030020 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 13 09:23:47.030191 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 13 09:23:47.030381 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 13 09:23:47.030400 kernel: PCI host bridge to bus 0000:00 Nov 13 09:23:47.030583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 13 09:23:47.030743 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 13 09:23:47.030890 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 13 09:23:47.031036 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 13 09:23:47.031181 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 13 09:23:47.031337 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Nov 13 09:23:47.031513 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 13 09:23:47.031700 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 13 09:23:47.031881 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Nov 13 09:23:47.032045 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Nov 13 09:23:47.032205 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Nov 13 09:23:47.032392 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Nov 13 09:23:47.032554 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 13 09:23:47.032742 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.032914 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Nov 13 09:23:47.033099 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.033260 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Nov 13 09:23:47.033470 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.033636 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Nov 13 09:23:47.033828 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Nov 13 
09:23:47.034002 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Nov 13 09:23:47.034174 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.034364 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Nov 13 09:23:47.034543 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.034704 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Nov 13 09:23:47.034876 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.035045 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Nov 13 09:23:47.035217 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Nov 13 09:23:47.035414 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Nov 13 09:23:47.035588 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 13 09:23:47.035750 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 13 09:23:47.035910 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Nov 13 09:23:47.036069 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Nov 13 09:23:47.036239 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Nov 13 09:23:47.036446 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Nov 13 09:23:47.036609 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Nov 13 09:23:47.036769 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Nov 13 09:23:47.036928 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Nov 13 09:23:47.037105 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 13 09:23:47.037266 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 13 09:23:47.037486 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 13 09:23:47.037648 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Nov 13 09:23:47.037807 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Nov 13 09:23:47.037977 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 13 09:23:47.038142 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 13 09:23:47.039156 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Nov 13 09:23:47.039378 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Nov 13 09:23:47.039545 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 13 09:23:47.039704 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 13 09:23:47.039859 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 13 09:23:47.040032 kernel: pci_bus 0000:02: extended config space not accessible Nov 13 09:23:47.040213 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Nov 13 09:23:47.040421 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Nov 13 09:23:47.040588 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 13 09:23:47.040749 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 13 09:23:47.040928 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Nov 13 09:23:47.041101 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Nov 13 09:23:47.041267 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 13 09:23:47.041579 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 13 09:23:47.041747 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 13 09:23:47.041923 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Nov 13 
09:23:47.042086 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Nov 13 09:23:47.042249 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 13 09:23:47.042437 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 13 09:23:47.042593 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 13 09:23:47.042758 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 13 09:23:47.042913 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 13 09:23:47.043086 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 13 09:23:47.043248 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 13 09:23:47.043440 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 13 09:23:47.043600 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 13 09:23:47.043763 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 13 09:23:47.043921 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 13 09:23:47.044100 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 13 09:23:47.044262 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 13 09:23:47.044466 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 13 09:23:47.044679 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 13 09:23:47.044851 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 13 09:23:47.045012 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 13 09:23:47.045196 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 13 09:23:47.045216 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 13 09:23:47.045229 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 13 09:23:47.045241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 13 09:23:47.045261 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 13 09:23:47.045274 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 13 09:23:47.045286 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 13 09:23:47.045298 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 13 09:23:47.045310 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 13 09:23:47.045335 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 13 09:23:47.045423 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 13 09:23:47.045437 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 13 09:23:47.045450 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 13 09:23:47.045469 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 13 09:23:47.045481 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 13 09:23:47.045493 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 13 09:23:47.045505 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 13 09:23:47.045517 kernel: iommu: Default domain type: Translated Nov 13 09:23:47.045529 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 13 09:23:47.045541 kernel: PCI: Using ACPI for IRQ routing Nov 13 09:23:47.045554 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 13 09:23:47.045566 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 13 09:23:47.045583 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Nov 13 09:23:47.045745 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Nov 13 09:23:47.045901 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 13 09:23:47.046055 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 13 09:23:47.046074 kernel: vgaarb: loaded Nov 13 09:23:47.046086 kernel: clocksource: Switched to clocksource kvm-clock Nov 13 09:23:47.046098 kernel: VFS: Disk quotas dquot_6.6.0 Nov 13 09:23:47.046111 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 13 09:23:47.046122 kernel: pnp: PnP ACPI init Nov 13 09:23:47.048242 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 13 09:23:47.048269 kernel: pnp: PnP ACPI: found 5 devices Nov 13 09:23:47.048283 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 13 09:23:47.048296 kernel: NET: Registered PF_INET protocol family Nov 13 09:23:47.048309 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 13 09:23:47.048335 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 13 09:23:47.048366 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 13 09:23:47.048379 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 13 09:23:47.048401 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 13 09:23:47.048413 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 13 09:23:47.048425 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 13 09:23:47.048438 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 13 09:23:47.048450 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 13 09:23:47.048462 kernel: NET: Registered PF_XDP protocol family Nov 13 09:23:47.048635 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Nov 13 09:23:47.048804 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 13 09:23:47.048979 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 13 09:23:47.049145 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 13 09:23:47.049308 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 13 09:23:47.050189 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 13 09:23:47.051912 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 13 09:23:47.052105 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 13 09:23:47.052295 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Nov 13 09:23:47.052529 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Nov 13 09:23:47.052693 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Nov 13 09:23:47.052866 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Nov 13 09:23:47.053031 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Nov 13 09:23:47.053193 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Nov 13 09:23:47.055196 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Nov 13 09:23:47.055478 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Nov 13 09:23:47.055685 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 13 09:23:47.055859 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 13 
09:23:47.056036 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 13 09:23:47.056211 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 13 09:23:47.057563 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 13 09:23:47.057732 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 13 09:23:47.057898 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 13 09:23:47.058056 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 13 09:23:47.058225 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 13 09:23:47.059497 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 13 09:23:47.059670 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 13 09:23:47.059830 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 13 09:23:47.059989 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 13 09:23:47.060158 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 13 09:23:47.063390 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 13 09:23:47.063637 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 13 09:23:47.063799 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 13 09:23:47.063957 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 13 09:23:47.064123 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 13 09:23:47.064282 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 13 09:23:47.066527 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 13 09:23:47.066708 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 13 09:23:47.066882 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 13 09:23:47.067061 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 13 09:23:47.067227 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 13 09:23:47.067426 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 13 09:23:47.067596 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 13 09:23:47.067758 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 13 09:23:47.067929 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 13 09:23:47.068091 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 13 09:23:47.068258 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 13 09:23:47.070522 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 13 09:23:47.070707 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 13 09:23:47.070876 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 13 09:23:47.071038 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 13 09:23:47.071189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 13 09:23:47.072392 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 13 09:23:47.072566 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 13 09:23:47.072714 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 13 09:23:47.072858 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Nov 13 09:23:47.073034 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 13 09:23:47.073185 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Nov 13 09:23:47.074443 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Nov 13 09:23:47.074620 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 13 09:23:47.074795 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Nov 13 09:23:47.074947 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 13 09:23:47.075096 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 13 09:23:47.075258 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Nov 13 09:23:47.075463 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 13 09:23:47.075613 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 13 09:23:47.075782 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 13 09:23:47.075931 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 13 09:23:47.076078 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 13 09:23:47.076247 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Nov 13 09:23:47.078462 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 13 09:23:47.078625 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 13 09:23:47.078791 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Nov 13 09:23:47.078951 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 13 09:23:47.079109 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 13 09:23:47.079273 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Nov 13 09:23:47.085628 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Nov 13 09:23:47.085831 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 13 09:23:47.086006 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Nov 13 09:23:47.086161 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 13 09:23:47.086412 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 13 09:23:47.086435 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 13 09:23:47.086449 kernel: PCI: CLS 0 bytes, default 64 Nov 13 09:23:47.086462 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 13 09:23:47.086475 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Nov 13 09:23:47.086488 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 13 09:23:47.086501 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Nov 13 09:23:47.086513 kernel: Initialise system trusted keyrings Nov 13 09:23:47.086535 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 13 09:23:47.086549 kernel: Key type asymmetric registered Nov 13 09:23:47.086561 kernel: Asymmetric key parser 'x509' registered Nov 13 09:23:47.086574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 13 09:23:47.086587 kernel: io scheduler mq-deadline registered Nov 13 09:23:47.086600 kernel: io scheduler kyber registered Nov 13 09:23:47.086612 kernel: io scheduler bfq registered Nov 13 09:23:47.086788 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 13 09:23:47.086952 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 13 09:23:47.087120 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.087286 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 13 09:23:47.087474 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Nov 13 09:23:47.087633 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.087796 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 13 09:23:47.087961 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 13 09:23:47.088131 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.088299 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 13 09:23:47.088509 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 13 09:23:47.088677 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.088844 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 13 09:23:47.089006 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 13 09:23:47.089183 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.089397 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 13 09:23:47.089557 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 13 09:23:47.089714 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.089875 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 13 09:23:47.090030 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 13 09:23:47.090196 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.091465 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 13 09:23:47.091637 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 13 09:23:47.091796 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 09:23:47.091817 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 13 09:23:47.091832 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 13 09:23:47.091855 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 13 09:23:47.091868 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 13 09:23:47.091881 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 13 09:23:47.091894 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 13 09:23:47.091907 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 13 09:23:47.091920 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 13 09:23:47.092093 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 13 09:23:47.092245 kernel: rtc_cmos 00:03: registered as rtc0 Nov 13 09:23:47.095482 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T09:23:46 UTC (1731489826) Nov 13 09:23:47.095651 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 13 09:23:47.095672 kernel: intel_pstate: CPU model not supported Nov 13 09:23:47.095686 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 13 09:23:47.095710 kernel: NET: Registered PF_INET6 protocol family Nov 13 09:23:47.095724 kernel: Segment Routing with IPv6 Nov 13 09:23:47.095737 kernel: In-situ OAM (IOAM) with IPv6 Nov 13 
09:23:47.095751 kernel: NET: Registered PF_PACKET protocol family Nov 13 09:23:47.095763 kernel: Key type dns_resolver registered Nov 13 09:23:47.095782 kernel: IPI shorthand broadcast: enabled Nov 13 09:23:47.095795 kernel: sched_clock: Marking stable (1154004103, 225924001)->(1610730673, -230802569) Nov 13 09:23:47.095807 kernel: registered taskstats version 1 Nov 13 09:23:47.095820 kernel: Loading compiled-in X.509 certificates Nov 13 09:23:47.095833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4' Nov 13 09:23:47.095846 kernel: Key type .fscrypt registered Nov 13 09:23:47.095858 kernel: Key type fscrypt-provisioning registered Nov 13 09:23:47.095870 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 13 09:23:47.095883 kernel: ima: Allocated hash algorithm: sha1 Nov 13 09:23:47.095901 kernel: ima: No architecture policies found Nov 13 09:23:47.095914 kernel: clk: Disabling unused clocks Nov 13 09:23:47.095926 kernel: Freeing unused kernel image (initmem) memory: 42968K Nov 13 09:23:47.095939 kernel: Write protecting the kernel read-only data: 36864k Nov 13 09:23:47.095952 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Nov 13 09:23:47.095965 kernel: Run /init as init process Nov 13 09:23:47.095978 kernel: with arguments: Nov 13 09:23:47.095991 kernel: /init Nov 13 09:23:47.096003 kernel: with environment: Nov 13 09:23:47.096020 kernel: HOME=/ Nov 13 09:23:47.096033 kernel: TERM=linux Nov 13 09:23:47.096045 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 13 09:23:47.096070 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 09:23:47.096088 systemd[1]: Detected virtualization kvm. Nov 13 09:23:47.096102 systemd[1]: Detected architecture x86-64. Nov 13 09:23:47.096115 systemd[1]: Running in initrd. Nov 13 09:23:47.096128 systemd[1]: No hostname configured, using default hostname. Nov 13 09:23:47.096148 systemd[1]: Hostname set to . Nov 13 09:23:47.096162 systemd[1]: Initializing machine ID from VM UUID. Nov 13 09:23:47.096175 systemd[1]: Queued start job for default target initrd.target. Nov 13 09:23:47.096189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 09:23:47.096202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 09:23:47.096217 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 13 09:23:47.096230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 09:23:47.096249 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 13 09:23:47.096263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 13 09:23:47.096279 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 13 09:23:47.096293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 13 09:23:47.096306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 13 09:23:47.096332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 13 09:23:47.096361 systemd[1]: Reached target paths.target - Path Units. Nov 13 09:23:47.096382 systemd[1]: Reached target slices.target - Slice Units. Nov 13 09:23:47.096396 systemd[1]: Reached target swap.target - Swaps. Nov 13 09:23:47.096410 systemd[1]: Reached target timers.target - Timer Units. Nov 13 09:23:47.096424 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 13 09:23:47.096437 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 13 09:23:47.096451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 13 09:23:47.096465 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 13 09:23:47.096479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 09:23:47.096492 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 09:23:47.096511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 09:23:47.096525 systemd[1]: Reached target sockets.target - Socket Units. Nov 13 09:23:47.096538 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 13 09:23:47.096552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 13 09:23:47.096565 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 13 09:23:47.096579 systemd[1]: Starting systemd-fsck-usr.service... Nov 13 09:23:47.096593 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 13 09:23:47.096607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 13 09:23:47.096620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 09:23:47.096705 systemd-journald[201]: Collecting audit messages is disabled. Nov 13 09:23:47.096739 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 13 09:23:47.096753 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 09:23:47.096773 systemd[1]: Finished systemd-fsck-usr.service. Nov 13 09:23:47.096793 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 13 09:23:47.096809 systemd-journald[201]: Journal started Nov 13 09:23:47.096841 systemd-journald[201]: Runtime Journal (/run/log/journal/e6aefc086c7a4721824b5d28f14f9dfd) is 4.7M, max 38.0M, 33.2M free. Nov 13 09:23:47.053619 systemd-modules-load[202]: Inserted module 'overlay' Nov 13 09:23:47.134851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 13 09:23:47.134885 kernel: Bridge firewalling registered Nov 13 09:23:47.134903 systemd[1]: Started systemd-journald.service - Journal Service. Nov 13 09:23:47.107015 systemd-modules-load[202]: Inserted module 'br_netfilter' Nov 13 09:23:47.137009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 13 09:23:47.138968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 09:23:47.146603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 09:23:47.150528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 09:23:47.158544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Nov 13 09:23:47.161502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 09:23:47.176553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 13 09:23:47.178717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 13 09:23:47.183985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 09:23:47.186553 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 13 09:23:47.199536 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 09:23:47.201724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 09:23:47.211553 dracut-cmdline[234]: dracut-dracut-053 Nov 13 09:23:47.216202 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 09:23:47.215455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 13 09:23:47.261590 systemd-resolved[241]: Positive Trust Anchors: Nov 13 09:23:47.261629 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 13 09:23:47.261674 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 13 09:23:47.266743 systemd-resolved[241]: Defaulting to hostname 'linux'. Nov 13 09:23:47.269440 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 13 09:23:47.270292 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 13 09:23:47.323409 kernel: SCSI subsystem initialized Nov 13 09:23:47.335403 kernel: Loading iSCSI transport class v2.0-870. Nov 13 09:23:47.347377 kernel: iscsi: registered transport (tcp) Nov 13 09:23:47.373448 kernel: iscsi: registered transport (qla4xxx) Nov 13 09:23:47.373557 kernel: QLogic iSCSI HBA Driver Nov 13 09:23:47.427958 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 13 09:23:47.435548 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 13 09:23:47.467313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 13 09:23:47.467441 kernel: device-mapper: uevent: version 1.0.3 Nov 13 09:23:47.470386 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 13 09:23:47.517417 kernel: raid6: sse2x4 gen() 7979 MB/s Nov 13 09:23:47.536435 kernel: raid6: sse2x2 gen() 5627 MB/s Nov 13 09:23:47.555019 kernel: raid6: sse2x1 gen() 5518 MB/s Nov 13 09:23:47.555139 kernel: raid6: using algorithm sse2x4 gen() 7979 MB/s Nov 13 09:23:47.574061 kernel: raid6: .... xor() 5325 MB/s, rmw enabled Nov 13 09:23:47.574230 kernel: raid6: using ssse3x2 recovery algorithm Nov 13 09:23:47.600408 kernel: xor: automatically using best checksumming function avx Nov 13 09:23:47.784415 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 13 09:23:47.800389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 13 09:23:47.807595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 09:23:47.840305 systemd-udevd[420]: Using default interface naming scheme 'v255'. Nov 13 09:23:47.847524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 09:23:47.854559 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 13 09:23:47.876601 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Nov 13 09:23:47.920165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 09:23:47.927558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 13 09:23:48.042246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 09:23:48.052784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 13 09:23:48.076013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 13 09:23:48.082618 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 09:23:48.083829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 09:23:48.086691 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 09:23:48.094518 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 13 09:23:48.120115 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 13 09:23:48.177420 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 13 09:23:48.256492 kernel: cryptd: max_cpu_qlen set to 1000 Nov 13 09:23:48.256522 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 13 09:23:48.256771 kernel: AVX version of gcm_enc/dec engaged. Nov 13 09:23:48.256838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 13 09:23:48.256858 kernel: GPT:17805311 != 125829119 Nov 13 09:23:48.256876 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 13 09:23:48.256892 kernel: GPT:17805311 != 125829119 Nov 13 09:23:48.256908 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 13 09:23:48.256925 kernel: AES CTR mode by8 optimization enabled Nov 13 09:23:48.256942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 09:23:48.256959 kernel: libata version 3.00 loaded. Nov 13 09:23:48.217002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 13 09:23:48.217210 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 13 09:23:48.262312 kernel: ACPI: bus type USB registered Nov 13 09:23:48.219411 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 09:23:48.220133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 09:23:48.282460 kernel: usbcore: registered new interface driver usbfs Nov 13 09:23:48.282493 kernel: usbcore: registered new interface driver hub Nov 13 09:23:48.282512 kernel: usbcore: registered new device driver usb Nov 13 09:23:48.220331 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 09:23:48.222111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 09:23:48.241616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 09:23:48.308941 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (466) Nov 13 09:23:48.318683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 13 09:23:48.443813 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) Nov 13 09:23:48.443848 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 13 09:23:48.444140 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 13 09:23:48.444439 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 13 09:23:48.444689 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 13 09:23:48.444942 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 13 09:23:48.445174 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 13 09:23:48.445410 kernel: ahci 0000:00:1f.2: version 3.0 Nov 13 09:23:48.445637 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 13 09:23:48.445670 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 13 09:23:48.445898 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 13 09:23:48.446131 kernel: hub 1-0:1.0: USB hub found Nov 13 09:23:48.446458 kernel: hub 1-0:1.0: 4 ports detected Nov 13 09:23:48.446683 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 13 09:23:48.447003 kernel: hub 2-0:1.0: USB hub found Nov 13 09:23:48.447257 kernel: hub 2-0:1.0: 4 ports detected Nov 13 09:23:48.447535 kernel: scsi host0: ahci Nov 13 09:23:48.447757 kernel: scsi host1: ahci Nov 13 09:23:48.447993 kernel: scsi host2: ahci Nov 13 09:23:48.448201 kernel: scsi host3: ahci Nov 13 09:23:48.448445 kernel: scsi host4: ahci Nov 13 09:23:48.448658 kernel: scsi host5: ahci Nov 13 09:23:48.448892 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Nov 13 09:23:48.448913 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Nov 13 09:23:48.448931 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Nov 13 09:23:48.448947 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Nov 13 09:23:48.448964 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Nov 13 09:23:48.448981 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Nov 13 09:23:48.449129 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 13 09:23:48.450379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 13 09:23:48.457629 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 13 09:23:48.458542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 13 09:23:48.471481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 09:23:48.483596 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 13 09:23:48.489652 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 09:23:48.495700 disk-uuid[560]: Primary Header is updated. Nov 13 09:23:48.495700 disk-uuid[560]: Secondary Entries is updated. Nov 13 09:23:48.495700 disk-uuid[560]: Secondary Header is updated. Nov 13 09:23:48.502380 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 09:23:48.525305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 09:23:48.608740 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 13 09:23:48.692385 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.692516 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.694619 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.697357 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.699702 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.699741 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 13 09:23:48.750841 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 13 09:23:48.756837 kernel: usbcore: registered new interface driver usbhid Nov 13 09:23:48.756904 kernel: usbhid: USB HID core driver Nov 13 09:23:48.764566 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 13 09:23:48.764645 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 13 09:23:49.516435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 09:23:49.517843 disk-uuid[561]: The operation has completed successfully. Nov 13 09:23:49.579453 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 13 09:23:49.579636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 13 09:23:49.592634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 13 09:23:49.597575 sh[583]: Success Nov 13 09:23:49.615368 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Nov 13 09:23:49.689416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 13 09:23:49.691461 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 13 09:23:49.700506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 13 09:23:49.717912 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a Nov 13 09:23:49.717991 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 13 09:23:49.720038 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 13 09:23:49.723485 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 13 09:23:49.723523 kernel: BTRFS info (device dm-0): using free space tree Nov 13 09:23:49.734540 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
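
The GPT warnings a little earlier ("GPT:17805311 != 125829119") and the disk-uuid updates above come from a disk image whose backup GPT structures still sit at the image's original end rather than at the last sector of the larger virtual disk; disk-uuid.service rewrites the primary and secondary headers to fix that. Below is a rough Python sketch of detecting the mismatch by reading the primary header's alternate-LBA field; it assumes 512-byte logical sectors and is not what disk-uuid.service actually runs.

    # Read the primary GPT header at LBA 1 and compare its alternate-LBA field
    # with the disk's real last sector. Illustrative only; needs root to open
    # a block device such as /dev/vda.
    import os
    import struct

    SECTOR = 512

    def gpt_alternate_mismatch(path):
        with open(path, "rb") as disk:
            disk.seek(SECTOR)                  # primary GPT header lives at LBA 1
            header = disk.read(92)
            if header[:8] != b"EFI PART":
                raise ValueError("no GPT signature")
            (alternate_lba,) = struct.unpack_from("<Q", header, 32)
            last_lba = os.lseek(disk.fileno(), 0, os.SEEK_END) // SECTOR - 1
        return alternate_lba, last_lba, alternate_lba != last_lba

    # On this machine, gpt_alternate_mismatch("/dev/vda") would have reported
    # (17805311, 125829119, True) before the secondary header was rewritten.
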
Nov 13 09:23:49.736097 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 13 09:23:49.745585 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 13 09:23:49.749208 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 13 09:23:49.764841 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 09:23:49.764918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 09:23:49.764949 kernel: BTRFS info (device vda6): using free space tree Nov 13 09:23:49.772863 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 09:23:49.787214 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 13 09:23:49.788110 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 09:23:49.795436 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 13 09:23:49.801612 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 13 09:23:49.917136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 09:23:49.931179 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 09:23:49.943575 ignition[686]: Ignition 2.20.0 Nov 13 09:23:49.943898 ignition[686]: Stage: fetch-offline Nov 13 09:23:49.945799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 09:23:49.943989 ignition[686]: no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:49.944007 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:49.944193 ignition[686]: parsed url from cmdline: "" Nov 13 09:23:49.944200 ignition[686]: no config URL provided Nov 13 09:23:49.944210 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 09:23:49.944227 ignition[686]: no config at "/usr/lib/ignition/user.ign" Nov 13 09:23:49.944243 ignition[686]: failed to fetch config: resource requires networking Nov 13 09:23:49.944528 ignition[686]: Ignition finished successfully Nov 13 09:23:49.971847 systemd-networkd[770]: lo: Link UP Nov 13 09:23:49.971864 systemd-networkd[770]: lo: Gained carrier Nov 13 09:23:49.974228 systemd-networkd[770]: Enumeration completed Nov 13 09:23:49.974827 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 09:23:49.974833 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 13 09:23:49.976057 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 09:23:49.976289 systemd-networkd[770]: eth0: Link UP Nov 13 09:23:49.976295 systemd-networkd[770]: eth0: Gained carrier Nov 13 09:23:49.976305 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 09:23:49.976922 systemd[1]: Reached target network.target - Network. Nov 13 09:23:49.984588 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 13 09:23:50.001736 systemd-networkd[770]: eth0: DHCPv4 address 10.230.76.174/30, gateway 10.230.76.173 acquired from 10.230.76.173 Nov 13 09:23:50.002519 ignition[773]: Ignition 2.20.0 Nov 13 09:23:50.002532 ignition[773]: Stage: fetch Nov 13 09:23:50.002778 ignition[773]: no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:50.002798 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:50.002939 ignition[773]: parsed url from cmdline: "" Nov 13 09:23:50.002946 ignition[773]: no config URL provided Nov 13 09:23:50.002956 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 09:23:50.002971 ignition[773]: no config at "/usr/lib/ignition/user.ign" Nov 13 09:23:50.003143 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 13 09:23:50.003485 ignition[773]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 13 09:23:50.003528 ignition[773]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 13 09:23:50.003548 ignition[773]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 13 09:23:50.203764 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Nov 13 09:23:50.290206 ignition[773]: GET result: OK Nov 13 09:23:50.290428 ignition[773]: parsing config with SHA512: 76780193df2a0beae1532c04a101efac86e57be633df0a99be4a645106bc1ce9bbe71cd36ec541fbdcedfb9e7f57795cc1f906557ff680474a50991d3eecba16 Nov 13 09:23:50.296593 unknown[773]: fetched base config from "system" Nov 13 09:23:50.296610 unknown[773]: fetched base config from "system" Nov 13 09:23:50.297051 ignition[773]: fetch: fetch complete Nov 13 09:23:50.296619 unknown[773]: fetched user config from "openstack" Nov 13 09:23:50.297059 ignition[773]: fetch: fetch passed Nov 13 09:23:50.299453 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 13 09:23:50.297132 ignition[773]: Ignition finished successfully Nov 13 09:23:50.314676 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 13 09:23:50.334320 ignition[780]: Ignition 2.20.0 Nov 13 09:23:50.334355 ignition[780]: Stage: kargs Nov 13 09:23:50.334600 ignition[780]: no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:50.334635 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:50.337099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 13 09:23:50.335797 ignition[780]: kargs: kargs passed Nov 13 09:23:50.335873 ignition[780]: Ignition finished successfully Nov 13 09:23:50.345564 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 13 09:23:50.363191 ignition[786]: Ignition 2.20.0 Nov 13 09:23:50.363213 ignition[786]: Stage: disks Nov 13 09:23:50.363492 ignition[786]: no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:50.365968 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 13 09:23:50.363512 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:50.364750 ignition[786]: disks: disks passed Nov 13 09:23:50.368309 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 13 09:23:50.364826 ignition[786]: Ignition finished successfully Nov 13 09:23:50.369876 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 13 09:23:50.371154 systemd[1]: Reached target local-fs.target - Local File Systems. 
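
The fetch stage above retries the OpenStack metadata URL once networking is up (the first attempt fails with "network is unreachable", the second succeeds) and then reports the SHA512 of the config it parses. Here is a minimal sketch of that retry-then-hash behaviour in Python rather than Ignition's actual Go implementation; the retry count and delay are arbitrary.

    # Fetch user_data from the OpenStack metadata service with retries, then log
    # the SHA512 of the retrieved config, as the Ignition fetch stage records.
    import hashlib
    import time
    import urllib.error
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(retries=10, delay=2.0):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(USER_DATA_URL, timeout=5) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {USER_DATA_URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    config = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
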
Nov 13 09:23:50.372651 systemd[1]: Reached target sysinit.target - System Initialization. Nov 13 09:23:50.373912 systemd[1]: Reached target basic.target - Basic System. Nov 13 09:23:50.384587 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 13 09:23:50.402220 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 13 09:23:50.404771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 13 09:23:50.409504 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 13 09:23:50.527374 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none. Nov 13 09:23:50.528853 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 13 09:23:50.530978 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 13 09:23:50.536472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 09:23:50.540625 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 13 09:23:50.542532 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 13 09:23:50.546572 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Nov 13 09:23:50.547381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 13 09:23:50.547426 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 09:23:50.564148 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802) Nov 13 09:23:50.564183 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 09:23:50.564203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 09:23:50.564220 kernel: BTRFS info (device vda6): using free space tree Nov 13 09:23:50.564266 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 09:23:50.564589 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 13 09:23:50.576184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 13 09:23:50.584604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 13 09:23:50.675466 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Nov 13 09:23:50.684902 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Nov 13 09:23:50.690456 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Nov 13 09:23:50.699595 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Nov 13 09:23:50.804184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 13 09:23:50.809518 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 13 09:23:50.813563 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 13 09:23:50.826386 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 09:23:50.827589 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 13 09:23:50.863242 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 13 09:23:50.866289 ignition[918]: INFO : Ignition 2.20.0 Nov 13 09:23:50.866289 ignition[918]: INFO : Stage: mount Nov 13 09:23:50.869079 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:50.869079 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:50.869079 ignition[918]: INFO : mount: mount passed Nov 13 09:23:50.869079 ignition[918]: INFO : Ignition finished successfully Nov 13 09:23:50.868893 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 13 09:23:51.356827 systemd-networkd[770]: eth0: Gained IPv6LL Nov 13 09:23:52.863421 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:932b:24:19ff:fee6:4cae/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:932b:24:19ff:fee6:4cae/64 assigned by NDisc. Nov 13 09:23:52.863437 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 09:23:57.750846 coreos-metadata[804]: Nov 13 09:23:57.750 WARN failed to locate config-drive, using the metadata service API instead Nov 13 09:23:57.774051 coreos-metadata[804]: Nov 13 09:23:57.773 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 09:23:57.787506 coreos-metadata[804]: Nov 13 09:23:57.787 INFO Fetch successful Nov 13 09:23:57.789476 coreos-metadata[804]: Nov 13 09:23:57.789 INFO wrote hostname srv-douj7.gb1.brightbox.com to /sysroot/etc/hostname Nov 13 09:23:57.790901 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 13 09:23:57.791075 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 13 09:23:57.799561 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 13 09:23:57.815604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 09:23:57.828373 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (936) Nov 13 09:23:57.840806 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 09:23:57.840916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 09:23:57.840935 kernel: BTRFS info (device vda6): using free space tree Nov 13 09:23:57.849375 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 09:23:57.853047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
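
The flatcar-openstack-hostname lines above fall back from the missing config drive to the metadata API, fetch the hostname, and write it into the target root. The following small sketch illustrates that step; the URL and destination path are taken from the log, while the real Afterburn logic and its error handling are omitted.

    # Fetch the instance hostname from the metadata API and write it to the
    # sysroot, as the coreos-metadata lines record. Illustrative sketch only.
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")

    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")
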
Nov 13 09:23:57.882943 ignition[953]: INFO : Ignition 2.20.0 Nov 13 09:23:57.885368 ignition[953]: INFO : Stage: files Nov 13 09:23:57.885368 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 09:23:57.885368 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:23:57.887718 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Nov 13 09:23:57.888581 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 13 09:23:57.888581 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 13 09:23:57.891639 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 13 09:23:57.892622 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 13 09:23:57.892622 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 13 09:23:57.892418 unknown[953]: wrote ssh authorized keys file for user: core Nov 13 09:23:57.895432 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 13 09:23:57.895432 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 13 09:23:57.895432 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 09:23:57.895432 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 13 09:23:58.219079 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 13 09:24:00.345649 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 09:24:00.354494 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 13 09:24:00.354494 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 13 09:24:00.904559 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 13 09:24:01.202718 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 13 09:24:01.202718 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 09:24:01.205764 
ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 09:24:01.205764 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 13 09:24:01.697638 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 13 09:24:03.474746 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 09:24:03.474746 ignition[953]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 13 09:24:03.479194 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 13 09:24:03.494363 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 13 09:24:03.494363 ignition[953]: INFO : files: files passed Nov 13 09:24:03.494363 ignition[953]: INFO : 
Ignition finished successfully Nov 13 09:24:03.481253 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 13 09:24:03.494705 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 13 09:24:03.504695 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 13 09:24:03.512241 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 13 09:24:03.512451 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 13 09:24:03.522990 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 09:24:03.522990 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 13 09:24:03.526464 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 09:24:03.528715 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 09:24:03.530607 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 13 09:24:03.537776 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 13 09:24:03.593775 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 13 09:24:03.594108 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 13 09:24:03.595809 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 13 09:24:03.597502 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 13 09:24:03.599293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 13 09:24:03.611673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 13 09:24:03.631166 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 09:24:03.635614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 13 09:24:03.652119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 13 09:24:03.653078 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 09:24:03.654700 systemd[1]: Stopped target timers.target - Timer Units. Nov 13 09:24:03.656111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 13 09:24:03.656295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 09:24:03.658026 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 13 09:24:03.658906 systemd[1]: Stopped target basic.target - Basic System. Nov 13 09:24:03.660425 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 13 09:24:03.661645 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 09:24:03.662969 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 13 09:24:03.664459 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 13 09:24:03.665941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 09:24:03.667501 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 13 09:24:03.668938 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 13 09:24:03.672051 systemd[1]: Stopped target swap.target - Swaps. 
Nov 13 09:24:03.672762 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 13 09:24:03.672963 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 13 09:24:03.674873 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 13 09:24:03.675737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 09:24:03.677219 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 13 09:24:03.677646 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 09:24:03.678785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 13 09:24:03.678999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 13 09:24:03.680904 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 13 09:24:03.681085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 09:24:03.682870 systemd[1]: ignition-files.service: Deactivated successfully. Nov 13 09:24:03.683039 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 13 09:24:03.696667 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 13 09:24:03.698396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 13 09:24:03.699086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 13 09:24:03.700089 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 09:24:03.703510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 13 09:24:03.703662 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 09:24:03.713858 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 13 09:24:03.714835 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 13 09:24:03.731008 ignition[1007]: INFO : Ignition 2.20.0 Nov 13 09:24:03.734175 ignition[1007]: INFO : Stage: umount Nov 13 09:24:03.734175 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 09:24:03.734175 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 09:24:03.734175 ignition[1007]: INFO : umount: umount passed Nov 13 09:24:03.734175 ignition[1007]: INFO : Ignition finished successfully Nov 13 09:24:03.733081 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 13 09:24:03.735738 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 13 09:24:03.735940 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 13 09:24:03.737971 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 13 09:24:03.738144 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 13 09:24:03.739586 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 13 09:24:03.739674 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 13 09:24:03.740800 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 13 09:24:03.740863 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 13 09:24:03.742114 systemd[1]: Stopped target network.target - Network. Nov 13 09:24:03.743391 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 13 09:24:03.743466 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 09:24:03.745126 systemd[1]: Stopped target paths.target - Path Units. 
Nov 13 09:24:03.746399 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 13 09:24:03.750617 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 09:24:03.751672 systemd[1]: Stopped target slices.target - Slice Units. Nov 13 09:24:03.753109 systemd[1]: Stopped target sockets.target - Socket Units. Nov 13 09:24:03.754472 systemd[1]: iscsid.socket: Deactivated successfully. Nov 13 09:24:03.754557 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 13 09:24:03.755889 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 13 09:24:03.755961 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 13 09:24:03.757507 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 13 09:24:03.757587 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 13 09:24:03.758927 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 13 09:24:03.759007 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 13 09:24:03.761831 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 13 09:24:03.764905 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 13 09:24:03.765698 systemd-networkd[770]: eth0: DHCPv6 lease lost Nov 13 09:24:03.769565 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 13 09:24:03.769774 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 13 09:24:03.773783 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 13 09:24:03.773933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 13 09:24:03.782504 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 13 09:24:03.783734 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 13 09:24:03.783820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 09:24:03.786572 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 09:24:03.791867 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 13 09:24:03.792055 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 13 09:24:03.797177 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 09:24:03.797815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 09:24:03.800026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 13 09:24:03.800134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 13 09:24:03.801750 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 13 09:24:03.801814 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 09:24:03.803860 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 13 09:24:03.804142 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 09:24:03.810924 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 13 09:24:03.811068 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 13 09:24:03.812725 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 13 09:24:03.812782 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 09:24:03.814122 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Nov 13 09:24:03.814194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 13 09:24:03.818191 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 13 09:24:03.818260 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 13 09:24:03.819582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 13 09:24:03.819661 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 09:24:03.828611 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 13 09:24:03.831736 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 13 09:24:03.831819 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 09:24:03.834840 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 13 09:24:03.834910 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 09:24:03.838297 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 13 09:24:03.838383 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 09:24:03.840484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 09:24:03.840547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 09:24:03.844218 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 13 09:24:03.844484 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 13 09:24:03.846245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 13 09:24:03.846401 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 13 09:24:03.937808 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 13 09:24:03.938040 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 13 09:24:03.940143 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 13 09:24:03.940880 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 13 09:24:03.940977 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 13 09:24:03.966912 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 13 09:24:03.976525 systemd[1]: Switching root. Nov 13 09:24:04.013567 systemd-journald[201]: Journal stopped Nov 13 09:24:05.466495 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 13 09:24:05.466649 kernel: SELinux: policy capability network_peer_controls=1 Nov 13 09:24:05.466687 kernel: SELinux: policy capability open_perms=1 Nov 13 09:24:05.466705 kernel: SELinux: policy capability extended_socket_class=1 Nov 13 09:24:05.466728 kernel: SELinux: policy capability always_check_network=0 Nov 13 09:24:05.466759 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 13 09:24:05.466778 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 13 09:24:05.466811 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 13 09:24:05.466845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 13 09:24:05.466869 kernel: audit: type=1403 audit(1731489844.287:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 13 09:24:05.466902 systemd[1]: Successfully loaded SELinux policy in 53.168ms. Nov 13 09:24:05.466965 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.111ms. 
Nov 13 09:24:05.466987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 09:24:05.467007 systemd[1]: Detected virtualization kvm. Nov 13 09:24:05.467026 systemd[1]: Detected architecture x86-64. Nov 13 09:24:05.467051 systemd[1]: Detected first boot. Nov 13 09:24:05.467103 systemd[1]: Hostname set to . Nov 13 09:24:05.467124 systemd[1]: Initializing machine ID from VM UUID. Nov 13 09:24:05.467158 zram_generator::config[1069]: No configuration found. Nov 13 09:24:05.467186 systemd[1]: Populated /etc with preset unit settings. Nov 13 09:24:05.467212 systemd[1]: Queued start job for default target multi-user.target. Nov 13 09:24:05.467231 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 13 09:24:05.467257 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 13 09:24:05.467283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 13 09:24:05.467317 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 13 09:24:05.467337 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 13 09:24:05.467391 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 13 09:24:05.467413 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 13 09:24:05.467432 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 13 09:24:05.467452 systemd[1]: Created slice user.slice - User and Session Slice. Nov 13 09:24:05.467477 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 09:24:05.467503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 09:24:05.467530 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 13 09:24:05.467563 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 13 09:24:05.467589 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 13 09:24:05.467609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 09:24:05.467636 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 13 09:24:05.467656 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 09:24:05.467675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 13 09:24:05.467699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 09:24:05.467751 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 09:24:05.467772 systemd[1]: Reached target slices.target - Slice Units. Nov 13 09:24:05.467791 systemd[1]: Reached target swap.target - Swaps. Nov 13 09:24:05.467811 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 13 09:24:05.467836 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 13 09:24:05.467868 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
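
"Initializing machine ID from VM UUID" above refers to deriving the machine ID from the hypervisor-provided product UUID on first boot. A hedged sketch of that derivation follows; the DMI sysfs path and the normalization shown are assumptions based on what KVM guests expose, not systemd's actual code.

    # Derive a machine-id-style value from the VM's DMI product UUID, roughly
    # what "Initializing machine ID from VM UUID" refers to on a KVM first boot.
    # Reading /sys/class/dmi/id/product_uuid needs root; sketch only.
    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()   # machine-id form: 32 lowercase hex chars

    print(machine_id_from_vm_uuid())
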
Nov 13 09:24:05.467889 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 13 09:24:05.467917 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 09:24:05.467938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 09:24:05.467964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 09:24:05.467990 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 13 09:24:05.468010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 13 09:24:05.468029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 13 09:24:05.468049 systemd[1]: Mounting media.mount - External Media Directory... Nov 13 09:24:05.468084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:05.468105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 13 09:24:05.468124 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 13 09:24:05.468143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 13 09:24:05.468168 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 13 09:24:05.468196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 09:24:05.468222 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 13 09:24:05.468247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 13 09:24:05.468267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 13 09:24:05.468306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 13 09:24:05.468327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 09:24:05.468368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 13 09:24:05.468390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 13 09:24:05.468416 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 13 09:24:05.468437 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 13 09:24:05.468457 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 13 09:24:05.468476 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 13 09:24:05.468521 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 13 09:24:05.468543 kernel: ACPI: bus type drm_connector registered Nov 13 09:24:05.468562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 13 09:24:05.468581 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 13 09:24:05.468600 kernel: fuse: init (API version 7.39) Nov 13 09:24:05.468618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 13 09:24:05.468637 kernel: loop: module loaded Nov 13 09:24:05.468656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 13 09:24:05.468675 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 13 09:24:05.468745 systemd-journald[1188]: Collecting audit messages is disabled. Nov 13 09:24:05.468802 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 13 09:24:05.468829 systemd-journald[1188]: Journal started Nov 13 09:24:05.468862 systemd-journald[1188]: Runtime Journal (/run/log/journal/e6aefc086c7a4721824b5d28f14f9dfd) is 4.7M, max 38.0M, 33.2M free. Nov 13 09:24:05.473397 systemd[1]: Started systemd-journald.service - Journal Service. Nov 13 09:24:05.475777 systemd[1]: Mounted media.mount - External Media Directory. Nov 13 09:24:05.477019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 13 09:24:05.477970 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 13 09:24:05.478943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 13 09:24:05.480144 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 13 09:24:05.481452 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 09:24:05.482751 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 13 09:24:05.483015 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 13 09:24:05.484664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 13 09:24:05.485005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 13 09:24:05.486296 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 13 09:24:05.486558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 13 09:24:05.487962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 09:24:05.488308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 09:24:05.489998 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 13 09:24:05.490324 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 13 09:24:05.491650 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 13 09:24:05.492055 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 13 09:24:05.493488 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 13 09:24:05.494720 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 13 09:24:05.496324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 13 09:24:05.511273 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 13 09:24:05.518471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 13 09:24:05.529475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 13 09:24:05.531454 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 13 09:24:05.547600 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 13 09:24:05.566624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 13 09:24:05.569495 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 13 09:24:05.578552 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Nov 13 09:24:05.579747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 13 09:24:05.590267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 09:24:05.610541 systemd-journald[1188]: Time spent on flushing to /var/log/journal/e6aefc086c7a4721824b5d28f14f9dfd is 30.490ms for 1130 entries. Nov 13 09:24:05.610541 systemd-journald[1188]: System Journal (/var/log/journal/e6aefc086c7a4721824b5d28f14f9dfd) is 8.0M, max 584.8M, 576.8M free. Nov 13 09:24:05.670065 systemd-journald[1188]: Received client request to flush runtime journal. Nov 13 09:24:05.604549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 13 09:24:05.623362 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 13 09:24:05.624335 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 13 09:24:05.629209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 13 09:24:05.634215 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 13 09:24:05.674558 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 13 09:24:05.699520 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 13 09:24:05.700143 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 13 09:24:05.706968 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 13 09:24:05.713944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 09:24:05.724576 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 13 09:24:05.768100 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 13 09:24:05.781659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 13 09:24:05.797878 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 09:24:05.809670 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 13 09:24:05.817860 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 13 09:24:05.818411 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 13 09:24:05.828944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 09:24:05.842915 udevadm[1248]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 13 09:24:06.413818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 13 09:24:06.427695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 09:24:06.466058 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Nov 13 09:24:06.496968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 09:24:06.507733 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 09:24:06.541778 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 13 09:24:06.573780 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Nov 13 09:24:06.614382 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1267) Nov 13 09:24:06.642390 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1267) Nov 13 09:24:06.650201 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 13 09:24:06.699387 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1265) Nov 13 09:24:06.709372 kernel: mousedev: PS/2 mouse device common for all mice Nov 13 09:24:06.739422 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 13 09:24:06.746396 kernel: ACPI: button: Power Button [PWRF] Nov 13 09:24:06.828778 systemd-networkd[1258]: lo: Link UP Nov 13 09:24:06.828792 systemd-networkd[1258]: lo: Gained carrier Nov 13 09:24:06.831502 systemd-networkd[1258]: Enumeration completed Nov 13 09:24:06.831705 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 09:24:06.836364 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 09:24:06.836374 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 13 09:24:06.839133 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 09:24:06.839193 systemd-networkd[1258]: eth0: Link UP Nov 13 09:24:06.839199 systemd-networkd[1258]: eth0: Gained carrier Nov 13 09:24:06.839213 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 09:24:06.842577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 13 09:24:06.851501 systemd-networkd[1258]: eth0: DHCPv4 address 10.230.76.174/30, gateway 10.230.76.173 acquired from 10.230.76.173 Nov 13 09:24:06.890673 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 13 09:24:06.904447 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 13 09:24:06.914269 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 13 09:24:06.914619 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 13 09:24:06.905847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 09:24:06.979787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 09:24:07.114483 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 13 09:24:07.124711 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 13 09:24:07.196955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 09:24:07.214377 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 13 09:24:07.249653 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 13 09:24:07.251082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 13 09:24:07.256551 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 13 09:24:07.265372 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
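
The DHCPv4 lease above is a /30, which leaves exactly two usable addresses: the host and its gateway. A quick standard-library check of that arithmetic, using the addresses from the log:

    # Verify the /30 lease reported by systemd-networkd: only two usable hosts,
    # matching the 10.230.76.174 address and 10.230.76.173 gateway in the log.
    import ipaddress

    iface = ipaddress.ip_interface("10.230.76.174/30")
    net = iface.network
    print("network:", net)                                    # 10.230.76.172/30
    print("usable hosts:", [str(h) for h in net.hosts()])     # ['10.230.76.173', '10.230.76.174']
    print("gateway inside subnet:", ipaddress.ip_address("10.230.76.173") in net)
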
Nov 13 09:24:07.298925 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 13 09:24:07.300993 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 13 09:24:07.301918 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 13 09:24:07.302072 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 13 09:24:07.303144 systemd[1]: Reached target machines.target - Containers. Nov 13 09:24:07.305647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 13 09:24:07.314590 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 13 09:24:07.319510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 13 09:24:07.320412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 09:24:07.322596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 13 09:24:07.330153 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 13 09:24:07.343609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 13 09:24:07.347511 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 13 09:24:07.353589 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 13 09:24:07.377662 kernel: loop0: detected capacity change from 0 to 140992 Nov 13 09:24:07.398820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 13 09:24:07.401851 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 13 09:24:07.411378 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 13 09:24:07.430385 kernel: loop1: detected capacity change from 0 to 138184 Nov 13 09:24:07.476603 kernel: loop2: detected capacity change from 0 to 8 Nov 13 09:24:07.490385 kernel: loop3: detected capacity change from 0 to 211296 Nov 13 09:24:07.537960 kernel: loop4: detected capacity change from 0 to 140992 Nov 13 09:24:07.556380 kernel: loop5: detected capacity change from 0 to 138184 Nov 13 09:24:07.585429 kernel: loop6: detected capacity change from 0 to 8 Nov 13 09:24:07.590388 kernel: loop7: detected capacity change from 0 to 211296 Nov 13 09:24:07.603300 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Nov 13 09:24:07.606056 (sd-merge)[1317]: Merged extensions into '/usr'. Nov 13 09:24:07.609967 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Nov 13 09:24:07.610010 systemd[1]: Reloading... Nov 13 09:24:07.710367 zram_generator::config[1345]: No configuration found. Nov 13 09:24:07.911839 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 13 09:24:07.919224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 09:24:08.004170 systemd[1]: Reloading finished in 393 ms. 
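
The sd-merge lines above list the system extensions ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') being merged into /usr; systemd-sysext does this by stacking the extensions' /usr trees over the base tree with a read-only overlayfs. The sketch below shows that overlay idea only; the extension paths are hypothetical and systemd manages the real mounts itself.

    # Stack extension /usr trees over the base /usr with overlayfs, the mechanism
    # behind "Merged extensions into '/usr'". Paths are made up; requires root
    # and is for illustration only.
    import subprocess

    def merge_extensions(extension_usr_dirs, target="/usr"):
        # overlayfs resolves lookups left to right, so extensions go first and
        # the original /usr last; with no upperdir the result is read-only.
        lowerdir = ":".join(extension_usr_dirs + [target])
        subprocess.run(
            ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", target],
            check=True,
        )

    # merge_extensions(["/run/extensions/kubernetes/usr",
    #                   "/run/extensions/oem-openstack/usr"])
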
Nov 13 09:24:08.026795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 13 09:24:08.031810 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 13 09:24:08.046627 systemd[1]: Starting ensure-sysext.service... Nov 13 09:24:08.055582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 13 09:24:08.071749 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Nov 13 09:24:08.071780 systemd[1]: Reloading... Nov 13 09:24:08.093186 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 13 09:24:08.093875 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 13 09:24:08.096003 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 13 09:24:08.096451 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Nov 13 09:24:08.096577 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Nov 13 09:24:08.102957 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Nov 13 09:24:08.102977 systemd-tmpfiles[1409]: Skipping /boot Nov 13 09:24:08.121989 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Nov 13 09:24:08.122013 systemd-tmpfiles[1409]: Skipping /boot Nov 13 09:24:08.172368 zram_generator::config[1438]: No configuration found. Nov 13 09:24:08.361683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 09:24:08.443135 systemd[1]: Reloading finished in 370 ms. Nov 13 09:24:08.481365 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 09:24:08.501702 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 13 09:24:08.509643 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 13 09:24:08.522627 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 13 09:24:08.533666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 13 09:24:08.547736 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 13 09:24:08.568275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.568636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 09:24:08.574719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 13 09:24:08.590823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 09:24:08.608747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 13 09:24:08.611743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 09:24:08.611944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.620653 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
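The "Duplicate line for path ..., ignoring" warnings above are harmless, but the colliding entries can be located with a short scan of the tmpfiles.d directories. A sketch that only inspects the type and path columns of each rule:

```python
"""Sketch: find tmpfiles.d rules that collide on the same path, mirroring the
'Duplicate line for path ..., ignoring' warnings in the journal above."""
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)
for conf_dir in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
    for conf in sorted(Path(conf_dir).glob("*.conf")):
        for lineno, line in enumerate(conf.read_text().splitlines(), 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            fields = line.split()
            if len(fields) >= 2:          # format: type path mode user group age argument
                seen[fields[1]].append(f"{conf}:{lineno}")

for path, sources in seen.items():
    if len(sources) > 1:
        print(path, "->", ", ".join(sources))
```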
Nov 13 09:24:08.629553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 09:24:08.629884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 09:24:08.640785 augenrules[1533]: No rules Nov 13 09:24:08.652703 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 09:24:08.653146 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 13 09:24:08.657293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 13 09:24:08.657596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 13 09:24:08.662277 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 13 09:24:08.662614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 13 09:24:08.675427 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 13 09:24:08.686074 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.686772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 09:24:08.702903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 09:24:08.705534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 09:24:08.705765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 13 09:24:08.718383 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 13 09:24:08.719172 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 13 09:24:08.719328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.721783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 13 09:24:08.723318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 09:24:08.724423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 09:24:08.741058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.748769 systemd-resolved[1511]: Positive Trust Anchors: Nov 13 09:24:08.749317 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 13 09:24:08.749501 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 13 09:24:08.756874 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 13 09:24:08.757617 systemd-resolved[1511]: Using system hostname 'srv-douj7.gb1.brightbox.com'. 
Nov 13 09:24:08.757930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 09:24:08.776021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 13 09:24:08.782655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 13 09:24:08.788968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 09:24:08.794038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 13 09:24:08.797047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 09:24:08.797275 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 13 09:24:08.799118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 09:24:08.805269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 13 09:24:08.811768 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 13 09:24:08.814655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 13 09:24:08.814947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 13 09:24:08.819526 augenrules[1555]: /sbin/augenrules: No change Nov 13 09:24:08.820235 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 13 09:24:08.820527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 13 09:24:08.824305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 09:24:08.825664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 09:24:08.829354 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 13 09:24:08.829668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 13 09:24:08.836132 systemd[1]: Finished ensure-sysext.service. Nov 13 09:24:08.845618 augenrules[1587]: No rules Nov 13 09:24:08.847467 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 09:24:08.847982 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 13 09:24:08.855035 systemd[1]: Reached target network.target - Network. Nov 13 09:24:08.856730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 13 09:24:08.857689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 13 09:24:08.857925 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 13 09:24:08.864562 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 13 09:24:08.892917 systemd-networkd[1258]: eth0: Gained IPv6LL Nov 13 09:24:08.899007 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 13 09:24:08.904054 systemd[1]: Reached target network-online.target - Network is Online. Nov 13 09:24:08.954275 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 13 09:24:08.957284 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 13 09:24:08.958152 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 13 09:24:08.959023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 13 09:24:08.959869 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 13 09:24:08.960683 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 13 09:24:08.960753 systemd[1]: Reached target paths.target - Path Units. Nov 13 09:24:08.961454 systemd[1]: Reached target time-set.target - System Time Set. Nov 13 09:24:08.962493 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 13 09:24:08.963412 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 13 09:24:08.964200 systemd[1]: Reached target timers.target - Timer Units. Nov 13 09:24:08.966734 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 13 09:24:08.970184 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 13 09:24:08.973421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 13 09:24:08.977857 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 13 09:24:08.978612 systemd[1]: Reached target sockets.target - Socket Units. Nov 13 09:24:08.979358 systemd[1]: Reached target basic.target - Basic System. Nov 13 09:24:08.980410 systemd[1]: System is tainted: cgroupsv1 Nov 13 09:24:08.980472 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 13 09:24:08.980511 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 13 09:24:08.984493 systemd[1]: Starting containerd.service - containerd container runtime... Nov 13 09:24:08.988599 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 13 09:24:08.993268 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 13 09:24:09.022469 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 13 09:24:09.027384 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 13 09:24:09.029482 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 13 09:24:09.039457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:24:09.050795 jq[1607]: false Nov 13 09:24:09.060575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 13 09:24:09.067657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 13 09:24:09.085681 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 13 09:24:09.086617 dbus-daemon[1606]: [system] SELinux support is enabled Nov 13 09:24:09.093594 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 13 09:24:09.097167 dbus-daemon[1606]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1258 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 13 09:24:09.106839 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 13 09:24:09.119015 extend-filesystems[1608]: Found loop4 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found loop5 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found loop6 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found loop7 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda1 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda2 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda3 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found usr Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda4 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda6 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda7 Nov 13 09:24:09.119015 extend-filesystems[1608]: Found vda9 Nov 13 09:24:09.119015 extend-filesystems[1608]: Checking size of /dev/vda9 Nov 13 09:24:09.187498 extend-filesystems[1608]: Resized partition /dev/vda9 Nov 13 09:24:09.121662 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 13 09:24:09.199895 extend-filesystems[1646]: resize2fs 1.47.1 (20-May-2024) Nov 13 09:24:09.220644 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 13 09:24:09.125232 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 13 09:24:09.141866 systemd[1]: Starting update-engine.service - Update Engine... Nov 13 09:24:09.156873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 13 09:24:09.159876 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 13 09:24:09.997303 jq[1636]: true Nov 13 09:24:09.174857 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 13 09:24:09.175229 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 13 09:24:09.188073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 13 09:24:09.209762 systemd[1]: motdgen.service: Deactivated successfully. Nov 13 09:24:10.019581 jq[1652]: true Nov 13 09:24:09.210147 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 13 09:24:09.222669 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 13 09:24:09.223030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 13 09:24:09.994190 systemd-timesyncd[1597]: Contacted time server 176.58.127.165:123 (0.flatcar.pool.ntp.org). Nov 13 09:24:09.994277 systemd-timesyncd[1597]: Initial clock synchronization to Wed 2024-11-13 09:24:09.992909 UTC. Nov 13 09:24:09.994978 systemd-resolved[1511]: Clock change detected. Flushing caches. Nov 13 09:24:10.039734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 13 09:24:10.040018 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 13 09:24:10.041104 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 13 09:24:10.041140 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
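Timestamps on the line above are not monotonic because systemd-timesyncd stepped the wall clock mid-boot (hence "Clock change detected. Flushing caches."). A rough estimate of the step, taking the last pre-sync journal stamp on that line and the time the clock was synchronized to:

```python
"""Sketch: estimate the clock step applied during initial NTP synchronization,
which is why some entries above appear out of chronological order."""
from datetime import datetime

before = datetime(2024, 11, 13, 9, 24, 9, 223030)  # last journal stamp written pre-sync
after = datetime(2024, 11, 13, 9, 24, 9, 992909)   # "Initial clock synchronization to ..."

step = after - before
print(f"wall clock stepped forward by roughly {step.total_seconds():.3f} s")  # ~0.770 s
```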
Nov 13 09:24:10.043924 update_engine[1631]: I20241113 09:24:10.043619 1631 main.cc:92] Flatcar Update Engine starting Nov 13 09:24:10.045106 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 13 09:24:10.048499 (ntainerd)[1653]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 13 09:24:10.058986 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 13 09:24:10.065409 systemd[1]: Started update-engine.service - Update Engine. Nov 13 09:24:10.069296 update_engine[1631]: I20241113 09:24:10.067762 1631 update_check_scheduler.cc:74] Next update check in 10m36s Nov 13 09:24:10.073115 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 13 09:24:10.077029 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 13 09:24:10.092496 tar[1645]: linux-amd64/helm Nov 13 09:24:10.197793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1267) Nov 13 09:24:10.212573 systemd-logind[1628]: Watching system buttons on /dev/input/event2 (Power Button) Nov 13 09:24:10.216871 systemd-logind[1628]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 13 09:24:10.218348 systemd-logind[1628]: New seat seat0. Nov 13 09:24:10.221673 systemd[1]: Started systemd-logind.service - User Login Management. Nov 13 09:24:10.325089 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 13 09:24:10.381864 extend-filesystems[1646]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 13 09:24:10.381864 extend-filesystems[1646]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 13 09:24:10.381864 extend-filesystems[1646]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 13 09:24:10.381603 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 13 09:24:10.413066 sshd_keygen[1647]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 13 09:24:10.413262 extend-filesystems[1608]: Resized filesystem in /dev/vda9 Nov 13 09:24:10.419358 bash[1682]: Updated "/home/core/.ssh/authorized_keys" Nov 13 09:24:10.385070 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 13 09:24:10.391757 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 13 09:24:10.408432 systemd[1]: Starting sshkeys.service... Nov 13 09:24:10.483199 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 13 09:24:10.493336 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 13 09:24:10.513726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 13 09:24:10.535022 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 13 09:24:10.626360 systemd[1]: issuegen.service: Deactivated successfully. Nov 13 09:24:10.626752 systemd[1]: Finished issuegen.service - Generate /run/issue. 
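In byte terms, the resize2fs numbers above work out as follows (the journal states the filesystem uses 4 KiB blocks and grew from 1617920 to 15121403 blocks):

```python
"""Sketch: what the on-line resize reported above amounts to in bytes."""

BLOCK = 4096  # 4 KiB blocks, as stated in the journal
old_blocks, new_blocks = 1_617_920, 15_121_403

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes (~{size / 2**30:.2f} GiB)")
# before ~6.17 GiB, after ~57.68 GiB
```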
Nov 13 09:24:10.638226 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 13 09:24:10.641026 dbus-daemon[1606]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1667 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 13 09:24:10.644318 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 13 09:24:10.647329 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 13 09:24:10.659963 systemd[1]: Starting polkit.service - Authorization Manager... Nov 13 09:24:10.676373 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 13 09:24:10.696464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 13 09:24:10.708861 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 13 09:24:10.725405 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 13 09:24:10.726547 systemd[1]: Reached target getty.target - Login Prompts. Nov 13 09:24:10.737473 polkitd[1730]: Started polkitd version 121 Nov 13 09:24:10.751501 polkitd[1730]: Loading rules from directory /etc/polkit-1/rules.d Nov 13 09:24:10.752870 polkitd[1730]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 13 09:24:10.759045 polkitd[1730]: Finished loading, compiling and executing 2 rules Nov 13 09:24:10.763436 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 13 09:24:10.763840 containerd[1653]: time="2024-11-13T09:24:10.763674837Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 13 09:24:10.765256 polkitd[1730]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 13 09:24:10.765336 systemd[1]: Started polkit.service - Authorization Manager. Nov 13 09:24:10.796009 systemd-hostnamed[1667]: Hostname set to (static) Nov 13 09:24:10.809399 systemd-networkd[1258]: eth0: Ignoring DHCPv6 address 2a02:1348:179:932b:24:19ff:fee6:4cae/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:932b:24:19ff:fee6:4cae/64 assigned by NDisc. Nov 13 09:24:10.809411 systemd-networkd[1258]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 09:24:10.835620 containerd[1653]: time="2024-11-13T09:24:10.835284971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.840393 containerd[1653]: time="2024-11-13T09:24:10.840305893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 13 09:24:10.840481 containerd[1653]: time="2024-11-13T09:24:10.840407723Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 13 09:24:10.840481 containerd[1653]: time="2024-11-13T09:24:10.840441963Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.840794755Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.840852990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.840958685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.840981228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.841255279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.841283679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.841319436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 09:24:10.841905 containerd[1653]: time="2024-11-13T09:24:10.841337556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.845775 containerd[1653]: time="2024-11-13T09:24:10.845735142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.846550 containerd[1653]: time="2024-11-13T09:24:10.846516112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 13 09:24:10.848333 containerd[1653]: time="2024-11-13T09:24:10.848272128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 09:24:10.848416 containerd[1653]: time="2024-11-13T09:24:10.848338918Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 13 09:24:10.851470 containerd[1653]: time="2024-11-13T09:24:10.848555700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 13 09:24:10.851470 containerd[1653]: time="2024-11-13T09:24:10.851095601Z" level=info msg="metadata content store policy set" policy=shared Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.862595080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.862700703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.862728331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.862751649Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.862772767Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 13 09:24:10.863104 containerd[1653]: time="2024-11-13T09:24:10.863008681Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 13 09:24:10.863413 containerd[1653]: time="2024-11-13T09:24:10.863391188Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 13 09:24:10.863593 containerd[1653]: time="2024-11-13T09:24:10.863561727Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 13 09:24:10.863674 containerd[1653]: time="2024-11-13T09:24:10.863596689Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 13 09:24:10.863674 containerd[1653]: time="2024-11-13T09:24:10.863619382Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 13 09:24:10.863674 containerd[1653]: time="2024-11-13T09:24:10.863640344Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863674 containerd[1653]: time="2024-11-13T09:24:10.863660849Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863680976Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863714316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863738693Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863766169Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863786889Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.863993 containerd[1653]: time="2024-11-13T09:24:10.863804760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.863832361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865626494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865660534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865686366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865707734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865729347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865749657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865771403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865794052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865820132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865865627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865891578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865926587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865952972Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 13 09:24:10.867859 containerd[1653]: time="2024-11-13T09:24:10.865992277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866016739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866036532Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866133277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866169383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866191139Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866210120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866226298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866255799Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866281667Z" level=info msg="NRI interface is disabled by configuration." 
Nov 13 09:24:10.868424 containerd[1653]: time="2024-11-13T09:24:10.866310288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 13 09:24:10.868763 containerd[1653]: time="2024-11-13T09:24:10.866721090Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 13 09:24:10.868763 containerd[1653]: time="2024-11-13T09:24:10.866802563Z" level=info msg="Connect containerd service" Nov 13 09:24:10.868763 containerd[1653]: time="2024-11-13T09:24:10.866918303Z" level=info msg="using legacy CRI server" Nov 13 09:24:10.868763 containerd[1653]: time="2024-11-13T09:24:10.866940126Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 13 09:24:10.868763 containerd[1653]: time="2024-11-13T09:24:10.867163505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 13 09:24:10.871539 containerd[1653]: time="2024-11-13T09:24:10.870448334Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 09:24:10.872322 containerd[1653]: time="2024-11-13T09:24:10.872186241Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 13 09:24:10.872578 containerd[1653]: time="2024-11-13T09:24:10.872533538Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 13 09:24:10.872661 containerd[1653]: time="2024-11-13T09:24:10.872586934Z" level=info msg="Start subscribing containerd event" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.872694127Z" level=info msg="Start recovering state" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.872875415Z" level=info msg="Start event monitor" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.872931697Z" level=info msg="Start snapshots syncer" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.872950413Z" level=info msg="Start cni network conf syncer for default" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.872965008Z" level=info msg="Start streaming server" Nov 13 09:24:10.873422 containerd[1653]: time="2024-11-13T09:24:10.873090945Z" level=info msg="containerd successfully booted in 0.113353s" Nov 13 09:24:10.874101 systemd[1]: Started containerd.service - containerd container runtime. Nov 13 09:24:11.213301 tar[1645]: linux-amd64/LICENSE Nov 13 09:24:11.214869 tar[1645]: linux-amd64/README.md Nov 13 09:24:11.231601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 09:24:11.498057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:24:11.508440 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 09:24:12.203199 kubelet[1763]: E1113 09:24:12.203027 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 09:24:12.208174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 09:24:12.208543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 09:24:13.194028 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 13 09:24:13.204420 systemd[1]: Started sshd@0-10.230.76.174:22-139.178.68.195:55426.service - OpenSSH per-connection server daemon (139.178.68.195:55426). Nov 13 09:24:14.119270 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 55426 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:14.121525 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:14.139086 systemd-logind[1628]: New session 1 of user core. Nov 13 09:24:14.140390 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 13 09:24:14.156376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 13 09:24:14.178056 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 13 09:24:14.192555 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 13 09:24:14.198529 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 13 09:24:14.327021 systemd[1781]: Queued start job for default target default.target. Nov 13 09:24:14.328587 systemd[1781]: Created slice app.slice - User Application Slice. Nov 13 09:24:14.328730 systemd[1781]: Reached target paths.target - Paths. Nov 13 09:24:14.328753 systemd[1781]: Reached target timers.target - Timers. Nov 13 09:24:14.335028 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 13 09:24:14.346265 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 13 09:24:14.346365 systemd[1781]: Reached target sockets.target - Sockets. Nov 13 09:24:14.346389 systemd[1781]: Reached target basic.target - Basic System. Nov 13 09:24:14.346453 systemd[1781]: Reached target default.target - Main User Target. Nov 13 09:24:14.346506 systemd[1781]: Startup finished in 139ms. Nov 13 09:24:14.346677 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 13 09:24:14.365012 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 13 09:24:15.006970 systemd[1]: Started sshd@1-10.230.76.174:22-139.178.68.195:55430.service - OpenSSH per-connection server daemon (139.178.68.195:55430). Nov 13 09:24:15.795506 login[1733]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 09:24:15.798771 login[1734]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 09:24:15.803507 systemd-logind[1628]: New session 2 of user core. Nov 13 09:24:15.815706 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 13 09:24:15.822655 systemd-logind[1628]: New session 3 of user core. Nov 13 09:24:15.824960 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 13 09:24:15.895794 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 55430 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:15.898203 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:15.905696 systemd-logind[1628]: New session 4 of user core. Nov 13 09:24:15.915469 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 13 09:24:16.512938 sshd[1824]: Connection closed by 139.178.68.195 port 55430 Nov 13 09:24:16.513997 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:16.519329 systemd-logind[1628]: Session 4 logged out. Waiting for processes to exit. Nov 13 09:24:16.520339 systemd[1]: sshd@1-10.230.76.174:22-139.178.68.195:55430.service: Deactivated successfully. Nov 13 09:24:16.523699 systemd[1]: session-4.scope: Deactivated successfully. Nov 13 09:24:16.525715 systemd-logind[1628]: Removed session 4. Nov 13 09:24:16.663484 systemd[1]: Started sshd@2-10.230.76.174:22-139.178.68.195:56456.service - OpenSSH per-connection server daemon (139.178.68.195:56456). 
Nov 13 09:24:16.888618 coreos-metadata[1604]: Nov 13 09:24:16.888 WARN failed to locate config-drive, using the metadata service API instead Nov 13 09:24:16.914629 coreos-metadata[1604]: Nov 13 09:24:16.914 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 13 09:24:16.921255 coreos-metadata[1604]: Nov 13 09:24:16.921 INFO Fetch failed with 404: resource not found Nov 13 09:24:16.921255 coreos-metadata[1604]: Nov 13 09:24:16.921 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 09:24:16.921932 coreos-metadata[1604]: Nov 13 09:24:16.921 INFO Fetch successful Nov 13 09:24:16.922127 coreos-metadata[1604]: Nov 13 09:24:16.922 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 13 09:24:16.934879 coreos-metadata[1604]: Nov 13 09:24:16.934 INFO Fetch successful Nov 13 09:24:16.934879 coreos-metadata[1604]: Nov 13 09:24:16.934 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 13 09:24:16.955050 coreos-metadata[1604]: Nov 13 09:24:16.954 INFO Fetch successful Nov 13 09:24:16.955050 coreos-metadata[1604]: Nov 13 09:24:16.954 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 13 09:24:16.970536 coreos-metadata[1604]: Nov 13 09:24:16.970 INFO Fetch successful Nov 13 09:24:16.970536 coreos-metadata[1604]: Nov 13 09:24:16.970 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 13 09:24:16.990203 coreos-metadata[1604]: Nov 13 09:24:16.990 INFO Fetch successful Nov 13 09:24:17.027104 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 13 09:24:17.029602 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 13 09:24:17.559612 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 56456 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:17.561645 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:17.568643 systemd-logind[1628]: New session 5 of user core. Nov 13 09:24:17.576381 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 13 09:24:17.739257 coreos-metadata[1701]: Nov 13 09:24:17.739 WARN failed to locate config-drive, using the metadata service API instead Nov 13 09:24:17.761698 coreos-metadata[1701]: Nov 13 09:24:17.761 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 13 09:24:17.788782 coreos-metadata[1701]: Nov 13 09:24:17.788 INFO Fetch successful Nov 13 09:24:17.789021 coreos-metadata[1701]: Nov 13 09:24:17.788 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 13 09:24:17.825351 coreos-metadata[1701]: Nov 13 09:24:17.825 INFO Fetch successful Nov 13 09:24:17.827513 unknown[1701]: wrote ssh authorized keys file for user: core Nov 13 09:24:17.846521 update-ssh-keys[1847]: Updated "/home/core/.ssh/authorized_keys" Nov 13 09:24:17.849100 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 13 09:24:17.854337 systemd[1]: Finished sshkeys.service. Nov 13 09:24:17.859461 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 09:24:17.859703 systemd[1]: Startup finished in 18.860s (kernel) + 12.858s (userspace) = 31.719s. 
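The metadata agent above falls back from a config-drive to the link-local metadata service. The same endpoints it walks can be queried directly; this sketch uses only the URLs that appear in the journal and is reachable only from within the instance:

```python
"""Sketch: fetch the EC2-style metadata endpoints the Flatcar metadata agent
walks above, logging HTTP errors the same way it reports 404s."""
from urllib.request import urlopen
from urllib.error import HTTPError

BASE = "http://169.254.169.254/latest/meta-data"
for item in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    try:
        with urlopen(f"{BASE}/{item}", timeout=5) as resp:
            print(item, "=", resp.read().decode().strip())
    except HTTPError as err:
        print(item, "-> HTTP", err.code)  # the agent logs 404 as 'resource not found'
```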
Nov 13 09:24:18.178067 sshd[1842]: Connection closed by 139.178.68.195 port 56456 Nov 13 09:24:18.178922 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:18.184558 systemd[1]: sshd@2-10.230.76.174:22-139.178.68.195:56456.service: Deactivated successfully. Nov 13 09:24:18.187416 systemd[1]: session-5.scope: Deactivated successfully. Nov 13 09:24:18.187417 systemd-logind[1628]: Session 5 logged out. Waiting for processes to exit. Nov 13 09:24:18.189365 systemd-logind[1628]: Removed session 5. Nov 13 09:24:22.459064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 13 09:24:22.466119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:24:22.650090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:24:22.654581 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 09:24:22.743188 kubelet[1869]: E1113 09:24:22.742683 1869 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 09:24:22.748030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 09:24:22.748427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 09:24:28.328240 systemd[1]: Started sshd@3-10.230.76.174:22-139.178.68.195:55980.service - OpenSSH per-connection server daemon (139.178.68.195:55980). Nov 13 09:24:29.227254 sshd[1879]: Accepted publickey for core from 139.178.68.195 port 55980 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:29.229419 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:29.239059 systemd-logind[1628]: New session 6 of user core. Nov 13 09:24:29.241447 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 13 09:24:29.846573 sshd[1882]: Connection closed by 139.178.68.195 port 55980 Nov 13 09:24:29.847600 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:29.852520 systemd[1]: sshd@3-10.230.76.174:22-139.178.68.195:55980.service: Deactivated successfully. Nov 13 09:24:29.855688 systemd-logind[1628]: Session 6 logged out. Waiting for processes to exit. Nov 13 09:24:29.856535 systemd[1]: session-6.scope: Deactivated successfully. Nov 13 09:24:29.858365 systemd-logind[1628]: Removed session 6. Nov 13 09:24:30.010046 systemd[1]: Started sshd@4-10.230.76.174:22-139.178.68.195:55990.service - OpenSSH per-connection server daemon (139.178.68.195:55990). Nov 13 09:24:30.902048 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 55990 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:30.904127 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:30.910941 systemd-logind[1628]: New session 7 of user core. Nov 13 09:24:30.921380 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 13 09:24:31.517769 sshd[1890]: Connection closed by 139.178.68.195 port 55990 Nov 13 09:24:31.517595 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:31.523373 systemd[1]: sshd@4-10.230.76.174:22-139.178.68.195:55990.service: Deactivated successfully. Nov 13 09:24:31.527280 systemd-logind[1628]: Session 7 logged out. Waiting for processes to exit. Nov 13 09:24:31.528069 systemd[1]: session-7.scope: Deactivated successfully. Nov 13 09:24:31.529279 systemd-logind[1628]: Removed session 7. Nov 13 09:24:31.672346 systemd[1]: Started sshd@5-10.230.76.174:22-139.178.68.195:56002.service - OpenSSH per-connection server daemon (139.178.68.195:56002). Nov 13 09:24:32.565340 sshd[1895]: Accepted publickey for core from 139.178.68.195 port 56002 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:32.567326 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:32.575096 systemd-logind[1628]: New session 8 of user core. Nov 13 09:24:32.581383 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 13 09:24:32.998732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 13 09:24:33.006185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:24:33.155074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:24:33.161009 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 09:24:33.189616 sshd[1898]: Connection closed by 139.178.68.195 port 56002 Nov 13 09:24:33.190537 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:33.194260 systemd-logind[1628]: Session 8 logged out. Waiting for processes to exit. Nov 13 09:24:33.194726 systemd[1]: sshd@5-10.230.76.174:22-139.178.68.195:56002.service: Deactivated successfully. Nov 13 09:24:33.198222 systemd[1]: session-8.scope: Deactivated successfully. Nov 13 09:24:33.199936 systemd-logind[1628]: Removed session 8. Nov 13 09:24:33.288443 kubelet[1912]: E1113 09:24:33.288256 1912 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 09:24:33.293036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 09:24:33.293380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 09:24:33.341677 systemd[1]: Started sshd@6-10.230.76.174:22-139.178.68.195:56018.service - OpenSSH per-connection server daemon (139.178.68.195:56018). Nov 13 09:24:34.232161 sshd[1924]: Accepted publickey for core from 139.178.68.195 port 56018 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:34.234151 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:34.242412 systemd-logind[1628]: New session 9 of user core. Nov 13 09:24:34.245335 systemd[1]: Started session-9.scope - Session 9 of User core. 
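The kubelet keeps restarting and exiting for the same reason each time: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`, so the failures are expected until the node is bootstrapped. A simple presence check that explains the loop:

```python
"""Sketch: the condition behind the recurring kubelet failures above. The unit
restarts until /var/lib/kubelet/config.yaml exists (normally created by kubeadm)."""
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.is_file():
    print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can start")
else:
    print(f"{cfg} missing; kubelet will keep exiting until kubeadm (or the operator) writes it")
```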
Nov 13 09:24:34.727210 sudo[1928]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 13 09:24:34.727704 sudo[1928]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 09:24:34.744696 sudo[1928]: pam_unix(sudo:session): session closed for user root Nov 13 09:24:34.888921 sshd[1927]: Connection closed by 139.178.68.195 port 56018 Nov 13 09:24:34.890170 sshd-session[1924]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:34.894810 systemd[1]: sshd@6-10.230.76.174:22-139.178.68.195:56018.service: Deactivated successfully. Nov 13 09:24:34.899708 systemd[1]: session-9.scope: Deactivated successfully. Nov 13 09:24:34.900110 systemd-logind[1628]: Session 9 logged out. Waiting for processes to exit. Nov 13 09:24:34.902522 systemd-logind[1628]: Removed session 9. Nov 13 09:24:35.039312 systemd[1]: Started sshd@7-10.230.76.174:22-139.178.68.195:56026.service - OpenSSH per-connection server daemon (139.178.68.195:56026). Nov 13 09:24:35.942895 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 56026 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:35.944728 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:35.951905 systemd-logind[1628]: New session 10 of user core. Nov 13 09:24:35.960930 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 13 09:24:36.420558 sudo[1938]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 13 09:24:36.421600 sudo[1938]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 09:24:36.427236 sudo[1938]: pam_unix(sudo:session): session closed for user root Nov 13 09:24:36.435510 sudo[1937]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 13 09:24:36.436489 sudo[1937]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 09:24:36.457361 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 13 09:24:36.498607 augenrules[1960]: No rules Nov 13 09:24:36.499561 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 09:24:36.499989 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 13 09:24:36.503098 sudo[1937]: pam_unix(sudo:session): session closed for user root Nov 13 09:24:36.646704 sshd[1936]: Connection closed by 139.178.68.195 port 56026 Nov 13 09:24:36.647795 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Nov 13 09:24:36.651896 systemd[1]: sshd@7-10.230.76.174:22-139.178.68.195:56026.service: Deactivated successfully. Nov 13 09:24:36.655688 systemd[1]: session-10.scope: Deactivated successfully. Nov 13 09:24:36.656420 systemd-logind[1628]: Session 10 logged out. Waiting for processes to exit. Nov 13 09:24:36.658209 systemd-logind[1628]: Removed session 10. Nov 13 09:24:36.799619 systemd[1]: Started sshd@8-10.230.76.174:22-139.178.68.195:42772.service - OpenSSH per-connection server daemon (139.178.68.195:42772). Nov 13 09:24:37.693893 sshd[1969]: Accepted publickey for core from 139.178.68.195 port 42772 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:24:37.695782 sshd-session[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:24:37.702515 systemd-logind[1628]: New session 11 of user core. Nov 13 09:24:37.708393 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 13 09:24:38.169509 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 13 09:24:38.170024 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 09:24:38.595485 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 13 09:24:38.597570 (dockerd)[1992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 13 09:24:39.014901 dockerd[1992]: time="2024-11-13T09:24:39.014559921Z" level=info msg="Starting up" Nov 13 09:24:39.265520 dockerd[1992]: time="2024-11-13T09:24:39.265016835Z" level=info msg="Loading containers: start." Nov 13 09:24:39.493986 kernel: Initializing XFRM netlink socket Nov 13 09:24:39.624327 systemd-networkd[1258]: docker0: Link UP Nov 13 09:24:39.664310 dockerd[1992]: time="2024-11-13T09:24:39.664084288Z" level=info msg="Loading containers: done." Nov 13 09:24:39.689359 dockerd[1992]: time="2024-11-13T09:24:39.689071021Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 13 09:24:39.689359 dockerd[1992]: time="2024-11-13T09:24:39.689336532Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 13 09:24:39.689678 dockerd[1992]: time="2024-11-13T09:24:39.689536228Z" level=info msg="Daemon has completed initialization" Nov 13 09:24:39.731546 dockerd[1992]: time="2024-11-13T09:24:39.730346152Z" level=info msg="API listen on /run/docker.sock" Nov 13 09:24:39.732066 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 13 09:24:40.862549 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 13 09:24:41.046461 containerd[1653]: time="2024-11-13T09:24:41.046366852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 13 09:24:42.075320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338365760.mount: Deactivated successfully. Nov 13 09:24:43.425481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 13 09:24:43.438885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:24:43.621728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:24:43.637089 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 09:24:43.722183 kubelet[2258]: E1113 09:24:43.721691 2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 09:24:43.725217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 09:24:43.725591 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
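Annotation: dockerd reports "API listen on /run/docker.sock" once initialization completes. A quick liveness probe can speak plain HTTP over that unix socket and hit the engine's documented /_ping endpoint; the socket path and endpoint are the standard defaults, the two-second timeout is an arbitrary choice, and the caller needs access to the socket (root or the docker group).

```python
#!/usr/bin/env python3
"""Ping the Docker daemon over its unix socket, as started in the log above."""
import socket

DOCKER_SOCK = "/run/docker.sock"  # path from the "API listen on" log line

def docker_ping(path: str = DOCKER_SOCK) -> bool:
    """Return True if the daemon answers GET /_ping with HTTP 200."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        reply = s.recv(4096)
    return reply.startswith(b"HTTP/1.1 200")

if __name__ == "__main__":
    try:
        print("docker responding:", docker_ping())
    except OSError as exc:  # socket missing or insufficient permissions
        print("docker not reachable:", exc)
```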
Nov 13 09:24:44.059584 containerd[1653]: time="2024-11-13T09:24:44.059174375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:44.060786 containerd[1653]: time="2024-11-13T09:24:44.060690713Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140807" Nov 13 09:24:44.061507 containerd[1653]: time="2024-11-13T09:24:44.061470167Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:44.066447 containerd[1653]: time="2024-11-13T09:24:44.066374548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:44.068125 containerd[1653]: time="2024-11-13T09:24:44.067828308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 3.021344799s" Nov 13 09:24:44.068125 containerd[1653]: time="2024-11-13T09:24:44.067919814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 13 09:24:44.100868 containerd[1653]: time="2024-11-13T09:24:44.100797937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 13 09:24:46.539604 containerd[1653]: time="2024-11-13T09:24:46.538884373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:46.541120 containerd[1653]: time="2024-11-13T09:24:46.540825324Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218307" Nov 13 09:24:46.541988 containerd[1653]: time="2024-11-13T09:24:46.541915827Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:46.546876 containerd[1653]: time="2024-11-13T09:24:46.546818654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:46.553228 containerd[1653]: time="2024-11-13T09:24:46.552202234Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.451328472s" Nov 13 09:24:46.553228 containerd[1653]: time="2024-11-13T09:24:46.552264322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 13 
09:24:46.585147 containerd[1653]: time="2024-11-13T09:24:46.585082633Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 13 09:24:48.366195 containerd[1653]: time="2024-11-13T09:24:48.366123660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:48.367624 containerd[1653]: time="2024-11-13T09:24:48.367579385Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332668" Nov 13 09:24:48.368472 containerd[1653]: time="2024-11-13T09:24:48.368400857Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:48.372097 containerd[1653]: time="2024-11-13T09:24:48.372065310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:48.373879 containerd[1653]: time="2024-11-13T09:24:48.373666206Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.788525703s" Nov 13 09:24:48.373879 containerd[1653]: time="2024-11-13T09:24:48.373733914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 13 09:24:48.405741 containerd[1653]: time="2024-11-13T09:24:48.405645939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 13 09:24:49.870482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234728091.mount: Deactivated successfully. 
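Annotation: each "Pulled image ... in N s" line above pairs with a "bytes read=" counter a few lines earlier, so the effective pull rate can be read straight off the log. The worked example below uses the three control-plane pulls logged so far; the byte counts and durations are copied verbatim, and the MiB/s figure is simply bytes / 2^20 / seconds.

```python
#!/usr/bin/env python3
"""Effective pull throughput for the image pulls logged above."""

# (image, bytes read reported by containerd, pull duration in seconds)
PULLS = [
    ("kube-apiserver:v1.29.10", 35_140_807, 3.021344799),
    ("kube-controller-manager:v1.29.10", 32_218_307, 2.451328472),
    ("kube-scheduler:v1.29.10", 17_332_668, 1.788525703),
]

for image, nbytes, seconds in PULLS:
    rate = nbytes / (1024 * 1024) / seconds
    print(f"{image:40s} {nbytes / 1e6:6.1f} MB in {seconds:5.2f}s  ≈ {rate:5.1f} MiB/s")
```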
Nov 13 09:24:50.744096 containerd[1653]: time="2024-11-13T09:24:50.744016662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:50.745383 containerd[1653]: time="2024-11-13T09:24:50.745100613Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616824" Nov 13 09:24:50.746216 containerd[1653]: time="2024-11-13T09:24:50.746139474Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:50.752286 containerd[1653]: time="2024-11-13T09:24:50.749447983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:50.752286 containerd[1653]: time="2024-11-13T09:24:50.750428739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.344695664s" Nov 13 09:24:50.752286 containerd[1653]: time="2024-11-13T09:24:50.750467119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 13 09:24:50.783237 containerd[1653]: time="2024-11-13T09:24:50.783173740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 13 09:24:51.411924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246469129.mount: Deactivated successfully. 
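Annotation: containerd records each pulled image twice above, once by repo tag (registry.k8s.io/kube-proxy:v1.29.10) and once by repo digest (...@sha256:...); only the digest form is content-addressed and immune to a tag being re-pointed. A small helper that splits either form into its parts, using pure string handling and no registry access (the two sample references are taken from the log):

```python
#!/usr/bin/env python3
"""Split an image reference into repository, tag and digest components."""

def parse_ref(ref: str) -> dict:
    repo, digest, tag = ref, None, None
    if "@" in repo:
        repo, digest = repo.split("@", 1)
    # a ':' after the last '/' is a tag separator, not a registry port
    if ":" in repo.rsplit("/", 1)[-1]:
        repo, tag = repo.rsplit(":", 1)
    return {"repository": repo, "tag": tag, "digest": digest}

if __name__ == "__main__":
    for ref in (
        "registry.k8s.io/kube-proxy:v1.29.10",
        "registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8",
    ):
        print(parse_ref(ref))
```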
Nov 13 09:24:52.725912 containerd[1653]: time="2024-11-13T09:24:52.725712468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:52.727460 containerd[1653]: time="2024-11-13T09:24:52.727410695Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Nov 13 09:24:52.728419 containerd[1653]: time="2024-11-13T09:24:52.728339423Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:52.734798 containerd[1653]: time="2024-11-13T09:24:52.734661824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:52.737457 containerd[1653]: time="2024-11-13T09:24:52.737233783Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.95359253s" Nov 13 09:24:52.737457 containerd[1653]: time="2024-11-13T09:24:52.737287533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 13 09:24:52.772363 containerd[1653]: time="2024-11-13T09:24:52.772305307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 13 09:24:53.471300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190611031.mount: Deactivated successfully. 
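Annotation: the var-lib-containerd-tmpmounts-containerd\x2dmount... units above look odd only because systemd derives mount unit names from paths by escaping them: '/' becomes '-' and a literal '-' inside a path component becomes \x2d. The sketch below is a simplified re-implementation of roughly what `systemd-escape --path --suffix=mount` does; corner cases of the real escaping rules are glossed over.

```python
#!/usr/bin/env python3
"""Simplified systemd path escaping, matching the tmpmount unit names in the log."""

def systemd_escape_path(path: str, suffix: str = "mount") -> str:
    def esc(part: str, first: bool) -> str:
        out = []
        for i, ch in enumerate(part):
            if ch.isalnum() or ch == "_" or (ch == "." and not (first and i == 0)):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
        return "".join(out)

    parts = [p for p in path.split("/") if p]
    return "-".join(esc(p, i == 0) for i, p in enumerate(parts)) + "." + suffix

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount246469129"))
# expected: var-lib-containerd-tmpmounts-containerd\x2dmount246469129.mount
```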
Nov 13 09:24:53.479451 containerd[1653]: time="2024-11-13T09:24:53.479365882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:53.480872 containerd[1653]: time="2024-11-13T09:24:53.480807453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Nov 13 09:24:53.482005 containerd[1653]: time="2024-11-13T09:24:53.481949513Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:53.487000 containerd[1653]: time="2024-11-13T09:24:53.485504070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:53.487000 containerd[1653]: time="2024-11-13T09:24:53.486834707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 714.210248ms" Nov 13 09:24:53.487000 containerd[1653]: time="2024-11-13T09:24:53.486889406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 13 09:24:53.518474 containerd[1653]: time="2024-11-13T09:24:53.518409922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 13 09:24:53.923312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 13 09:24:53.932417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:24:54.241169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:24:54.241967 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 09:24:54.335256 kubelet[2370]: E1113 09:24:54.335051 2370 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 09:24:54.337977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 09:24:54.338373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 09:24:54.392861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898467824.mount: Deactivated successfully. Nov 13 09:24:55.154770 update_engine[1631]: I20241113 09:24:55.154600 1631 update_attempter.cc:509] Updating boot flags... 
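Annotation: the "Scheduled restart job, restart counter is at N" lines show systemd relaunching the kubelet roughly every ten seconds after each config-file failure, which is consistent with a fixed restart delay rather than an increasing backoff (the unit file itself is not shown in this log, so that reading is an inference). The intervals can be checked directly from the three restart timestamps copied from the log:

```python
#!/usr/bin/env python3
"""Spacing between the kubelet restart attempts logged above."""
from datetime import datetime

RESTARTS = ["09:24:32.998732", "09:24:43.425481", "09:24:53.923312"]  # restart counters 2, 3, 4

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in RESTARTS]
for earlier, later in zip(times, times[1:]):
    print(f"{earlier.time()} -> {later.time()}: {(later - earlier).total_seconds():.1f}s")
```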
Nov 13 09:24:55.208913 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2426) Nov 13 09:24:55.351157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2428) Nov 13 09:24:57.504237 containerd[1653]: time="2024-11-13T09:24:57.502683628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:57.506330 containerd[1653]: time="2024-11-13T09:24:57.506284386Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Nov 13 09:24:57.511009 containerd[1653]: time="2024-11-13T09:24:57.510953980Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:57.515116 containerd[1653]: time="2024-11-13T09:24:57.515066767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:24:57.516928 containerd[1653]: time="2024-11-13T09:24:57.516886063Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.998060579s" Nov 13 09:24:57.517010 containerd[1653]: time="2024-11-13T09:24:57.516930547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 13 09:25:01.982190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:25:02.001422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:25:02.035144 systemd[1]: Reloading requested from client PID 2502 ('systemctl') (unit session-11.scope)... Nov 13 09:25:02.035190 systemd[1]: Reloading... Nov 13 09:25:02.206409 zram_generator::config[2542]: No configuration found. Nov 13 09:25:02.400363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 09:25:02.498705 systemd[1]: Reloading finished in 462 ms. Nov 13 09:25:02.563584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 13 09:25:02.563768 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 13 09:25:02.564301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:25:02.579391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:25:02.803178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:25:02.819873 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 09:25:02.895322 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
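Annotation: the docker.socket warning during the reload above is systemd rewriting /var/run/docker.sock to /run/docker.sock because /var/run is only a compatibility symlink to /run on this image. A tiny sketch of the same normalization; instead of assuming the symlink, it checks it at runtime.

```python
#!/usr/bin/env python3
"""Rewrite legacy /var/run paths to /run, mirroring the docker.socket warning."""
import os

def normalize_legacy_run(path: str) -> str:
    """If /var/run really is a symlink to /run, return the canonical path."""
    if path.startswith("/var/run/") and os.path.realpath("/var/run") == "/run":
        return "/run/" + path[len("/var/run/"):]
    return path

print(normalize_legacy_run("/var/run/docker.sock"))  # /run/docker.sock when /var/run -> /run
```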
Nov 13 09:25:02.895322 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 09:25:02.895322 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 09:25:02.896147 kubelet[2618]: I1113 09:25:02.895409 2618 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 09:25:03.289907 kubelet[2618]: I1113 09:25:03.289262 2618 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 13 09:25:03.289907 kubelet[2618]: I1113 09:25:03.289323 2618 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 09:25:03.289907 kubelet[2618]: I1113 09:25:03.289606 2618 server.go:919] "Client rotation is on, will bootstrap in background" Nov 13 09:25:03.329734 kubelet[2618]: E1113 09:25:03.329619 2618 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.76.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.330229 kubelet[2618]: I1113 09:25:03.330028 2618 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 09:25:03.347037 kubelet[2618]: I1113 09:25:03.346562 2618 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 09:25:03.351433 kubelet[2618]: I1113 09:25:03.351380 2618 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 09:25:03.353938 kubelet[2618]: I1113 09:25:03.353878 2618 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 09:25:03.354255 kubelet[2618]: I1113 09:25:03.353965 2618 topology_manager.go:138] "Creating topology manager with none policy" Nov 13 09:25:03.354255 kubelet[2618]: I1113 09:25:03.353988 2618 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 09:25:03.354255 kubelet[2618]: I1113 09:25:03.354241 2618 state_mem.go:36] "Initialized new in-memory state store" Nov 13 09:25:03.355728 kubelet[2618]: W1113 09:25:03.355617 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.76.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-douj7.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.355728 kubelet[2618]: E1113 09:25:03.355695 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.76.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-douj7.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.356237 kubelet[2618]: I1113 09:25:03.356170 2618 kubelet.go:396] "Attempting to sync node with API server" Nov 13 09:25:03.356237 kubelet[2618]: I1113 09:25:03.356218 2618 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 09:25:03.356368 kubelet[2618]: I1113 09:25:03.356292 2618 kubelet.go:312] "Adding apiserver pod source" Nov 13 09:25:03.356368 kubelet[2618]: I1113 09:25:03.356324 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 09:25:03.359343 kubelet[2618]: W1113 09:25:03.359120 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.76.174:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.359343 kubelet[2618]: E1113 09:25:03.359193 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.76.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.360766 kubelet[2618]: I1113 09:25:03.360221 2618 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 13 09:25:03.364954 kubelet[2618]: I1113 09:25:03.364681 2618 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 09:25:03.366445 kubelet[2618]: W1113 09:25:03.366031 2618 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 13 09:25:03.368557 kubelet[2618]: I1113 09:25:03.368224 2618 server.go:1256] "Started kubelet" Nov 13 09:25:03.371985 kubelet[2618]: I1113 09:25:03.371957 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 09:25:03.379712 kubelet[2618]: E1113 09:25:03.378193 2618 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.76.174:6443/api/v1/namespaces/default/events\": dial tcp 10.230.76.174:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-douj7.gb1.brightbox.com.18077ce4ae6f3c10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-douj7.gb1.brightbox.com,UID:srv-douj7.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-douj7.gb1.brightbox.com,},FirstTimestamp:2024-11-13 09:25:03.368158224 +0000 UTC m=+0.541953884,LastTimestamp:2024-11-13 09:25:03.368158224 +0000 UTC m=+0.541953884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-douj7.gb1.brightbox.com,}" Nov 13 09:25:03.381100 kubelet[2618]: I1113 09:25:03.381064 2618 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 09:25:03.382807 kubelet[2618]: I1113 09:25:03.382776 2618 server.go:461] "Adding debug handlers to kubelet server" Nov 13 09:25:03.385012 kubelet[2618]: I1113 09:25:03.384567 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 09:25:03.385012 kubelet[2618]: I1113 09:25:03.384954 2618 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 09:25:03.385302 kubelet[2618]: I1113 09:25:03.385279 2618 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 09:25:03.388453 kubelet[2618]: E1113 09:25:03.388417 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-douj7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.174:6443: connect: connection refused" interval="200ms" Nov 13 09:25:03.388657 kubelet[2618]: I1113 09:25:03.388634 2618 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 13 09:25:03.390031 kubelet[2618]: W1113 09:25:03.389361 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.230.76.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.390031 kubelet[2618]: E1113 09:25:03.389453 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.76.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.390031 kubelet[2618]: I1113 09:25:03.389607 2618 reconciler_new.go:29] "Reconciler: start to sync state" Nov 13 09:25:03.392132 kubelet[2618]: I1113 09:25:03.392099 2618 factory.go:221] Registration of the systemd container factory successfully Nov 13 09:25:03.392287 kubelet[2618]: I1113 09:25:03.392255 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 09:25:03.394909 kubelet[2618]: E1113 09:25:03.394361 2618 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 09:25:03.395874 kubelet[2618]: I1113 09:25:03.395430 2618 factory.go:221] Registration of the containerd container factory successfully Nov 13 09:25:03.428643 kubelet[2618]: I1113 09:25:03.427785 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 09:25:03.429913 kubelet[2618]: I1113 09:25:03.429512 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 13 09:25:03.429913 kubelet[2618]: I1113 09:25:03.429587 2618 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 09:25:03.429913 kubelet[2618]: I1113 09:25:03.429632 2618 kubelet.go:2329] "Starting kubelet main sync loop" Nov 13 09:25:03.429913 kubelet[2618]: E1113 09:25:03.429768 2618 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 09:25:03.440062 kubelet[2618]: W1113 09:25:03.439986 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.76.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.440301 kubelet[2618]: E1113 09:25:03.440281 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.76.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:03.441809 kubelet[2618]: I1113 09:25:03.441781 2618 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 09:25:03.442045 kubelet[2618]: I1113 09:25:03.442026 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 09:25:03.442606 kubelet[2618]: I1113 09:25:03.442210 2618 state_mem.go:36] "Initialized new in-memory state store" Nov 13 09:25:03.444492 kubelet[2618]: I1113 09:25:03.444381 2618 policy_none.go:49] "None policy: Start" Nov 13 09:25:03.445574 kubelet[2618]: I1113 09:25:03.445498 2618 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 09:25:03.445662 kubelet[2618]: I1113 09:25:03.445591 2618 state_mem.go:35] "Initializing new in-memory state store" Nov 13 09:25:03.457267 
kubelet[2618]: I1113 09:25:03.457096 2618 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 09:25:03.459274 kubelet[2618]: I1113 09:25:03.459169 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 09:25:03.463420 kubelet[2618]: E1113 09:25:03.463358 2618 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-douj7.gb1.brightbox.com\" not found" Nov 13 09:25:03.489431 kubelet[2618]: I1113 09:25:03.488971 2618 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.489632 kubelet[2618]: E1113 09:25:03.489538 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.76.174:6443/api/v1/nodes\": dial tcp 10.230.76.174:6443: connect: connection refused" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.530437 kubelet[2618]: I1113 09:25:03.530352 2618 topology_manager.go:215] "Topology Admit Handler" podUID="228c9c7cadd42a03bc61f5340252111f" podNamespace="kube-system" podName="kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.537018 kubelet[2618]: I1113 09:25:03.536538 2618 topology_manager.go:215] "Topology Admit Handler" podUID="0f7f58d9e5c268512ffd03ada62be972" podNamespace="kube-system" podName="kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.539903 kubelet[2618]: I1113 09:25:03.539828 2618 topology_manager.go:215] "Topology Admit Handler" podUID="dc8ed392fe05f4fa8fc0aa9619bf0ab5" podNamespace="kube-system" podName="kube-scheduler-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.589483 kubelet[2618]: E1113 09:25:03.589288 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-douj7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.174:6443: connect: connection refused" interval="400ms" Nov 13 09:25:03.590819 kubelet[2618]: I1113 09:25:03.590744 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-k8s-certs\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.590952 kubelet[2618]: I1113 09:25:03.590866 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-kubeconfig\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.590952 kubelet[2618]: I1113 09:25:03.590911 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.590952 kubelet[2618]: I1113 09:25:03.590948 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/dc8ed392fe05f4fa8fc0aa9619bf0ab5-kubeconfig\") pod \"kube-scheduler-srv-douj7.gb1.brightbox.com\" (UID: \"dc8ed392fe05f4fa8fc0aa9619bf0ab5\") " pod="kube-system/kube-scheduler-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.591103 kubelet[2618]: I1113 09:25:03.590979 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-ca-certs\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.591103 kubelet[2618]: I1113 09:25:03.591008 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-k8s-certs\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.591103 kubelet[2618]: I1113 09:25:03.591041 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.591103 kubelet[2618]: I1113 09:25:03.591083 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-ca-certs\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.591263 kubelet[2618]: I1113 09:25:03.591114 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-flexvolume-dir\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.693747 kubelet[2618]: I1113 09:25:03.693692 2618 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.694390 kubelet[2618]: E1113 09:25:03.694344 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.76.174:6443/api/v1/nodes\": dial tcp 10.230.76.174:6443: connect: connection refused" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:03.854171 containerd[1653]: time="2024-11-13T09:25:03.853915351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-douj7.gb1.brightbox.com,Uid:228c9c7cadd42a03bc61f5340252111f,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:03.854171 containerd[1653]: time="2024-11-13T09:25:03.853940634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-douj7.gb1.brightbox.com,Uid:0f7f58d9e5c268512ffd03ada62be972,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:03.860981 containerd[1653]: time="2024-11-13T09:25:03.860484779Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-douj7.gb1.brightbox.com,Uid:dc8ed392fe05f4fa8fc0aa9619bf0ab5,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:03.991023 kubelet[2618]: E1113 09:25:03.990976 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-douj7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.174:6443: connect: connection refused" interval="800ms" Nov 13 09:25:04.098109 kubelet[2618]: I1113 09:25:04.098063 2618 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:04.098538 kubelet[2618]: E1113 09:25:04.098490 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.76.174:6443/api/v1/nodes\": dial tcp 10.230.76.174:6443: connect: connection refused" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:04.161572 kubelet[2618]: W1113 09:25:04.161376 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.76.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.161572 kubelet[2618]: E1113 09:25:04.161462 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.76.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.341569 kubelet[2618]: W1113 09:25:04.341403 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.76.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-douj7.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.341569 kubelet[2618]: E1113 09:25:04.341481 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.76.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-douj7.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.387199 kubelet[2618]: W1113 09:25:04.387098 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.76.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.387199 kubelet[2618]: E1113 09:25:04.387203 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.76.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.489916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180311376.mount: Deactivated successfully. 
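Annotation: every reflector, lease and node-registration failure above ends in "dial tcp 10.230.76.174:6443: connect: connection refused", i.e. the port is reachable but nothing is listening yet; the kubelet is in the middle of creating the very static pods (kube-apiserver among them) that will open that port. A connection-refused error is easy to distinguish from a firewall drop or routing problem with a one-shot TCP probe; the address and port are the ones from the log, the timeout is arbitrary.

```python
#!/usr/bin/env python3
"""One-shot TCP probe of the API server endpoint the kubelet is retrying."""
import socket

HOST, PORT = "10.230.76.174", 6443  # endpoint from the "connection refused" log lines

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"{HOST}:{PORT} accepts connections (kube-apiserver is listening)")
except ConnectionRefusedError:
    print(f"{HOST}:{PORT} refused: the host answered, but nothing listens on the port yet")
except OSError as exc:
    print(f"{HOST}:{PORT} unreachable: {exc}")
```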
Nov 13 09:25:04.497773 containerd[1653]: time="2024-11-13T09:25:04.497665339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 09:25:04.501162 containerd[1653]: time="2024-11-13T09:25:04.501115547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 13 09:25:04.501916 containerd[1653]: time="2024-11-13T09:25:04.501785813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 09:25:04.503209 containerd[1653]: time="2024-11-13T09:25:04.503174642Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 09:25:04.504753 containerd[1653]: time="2024-11-13T09:25:04.504699407Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 09:25:04.505898 containerd[1653]: time="2024-11-13T09:25:04.505529639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 09:25:04.505898 containerd[1653]: time="2024-11-13T09:25:04.505810103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 09:25:04.509613 containerd[1653]: time="2024-11-13T09:25:04.509119820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 09:25:04.511385 containerd[1653]: time="2024-11-13T09:25:04.511348517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.991835ms" Nov 13 09:25:04.514666 containerd[1653]: time="2024-11-13T09:25:04.514625147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.084492ms" Nov 13 09:25:04.515492 containerd[1653]: time="2024-11-13T09:25:04.515451288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.836675ms" Nov 13 09:25:04.624982 kubelet[2618]: W1113 09:25:04.624811 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.76.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.624982 
kubelet[2618]: E1113 09:25:04.624947 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.76.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:04.793050 kubelet[2618]: E1113 09:25:04.792886 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.76.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-douj7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.76.174:6443: connect: connection refused" interval="1.6s" Nov 13 09:25:04.865010 containerd[1653]: time="2024-11-13T09:25:04.864771634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:04.865789 containerd[1653]: time="2024-11-13T09:25:04.864985037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:04.865789 containerd[1653]: time="2024-11-13T09:25:04.865062827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:04.865977 containerd[1653]: time="2024-11-13T09:25:04.865525773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:04.868216 containerd[1653]: time="2024-11-13T09:25:04.868107515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:04.870237 containerd[1653]: time="2024-11-13T09:25:04.868672247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:04.870525 containerd[1653]: time="2024-11-13T09:25:04.870391196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:04.874601 containerd[1653]: time="2024-11-13T09:25:04.874443827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:04.898501 kubelet[2618]: E1113 09:25:04.898438 2618 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.76.174:6443/api/v1/namespaces/default/events\": dial tcp 10.230.76.174:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-douj7.gb1.brightbox.com.18077ce4ae6f3c10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-douj7.gb1.brightbox.com,UID:srv-douj7.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-douj7.gb1.brightbox.com,},FirstTimestamp:2024-11-13 09:25:03.368158224 +0000 UTC m=+0.541953884,LastTimestamp:2024-11-13 09:25:03.368158224 +0000 UTC m=+0.541953884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-douj7.gb1.brightbox.com,}" Nov 13 09:25:04.902435 kubelet[2618]: I1113 09:25:04.902397 2618 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:04.902822 kubelet[2618]: E1113 09:25:04.902798 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.76.174:6443/api/v1/nodes\": dial tcp 10.230.76.174:6443: connect: connection refused" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:04.921100 containerd[1653]: time="2024-11-13T09:25:04.862947367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:04.923691 containerd[1653]: time="2024-11-13T09:25:04.922583037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:04.923691 containerd[1653]: time="2024-11-13T09:25:04.923565597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:04.924548 containerd[1653]: time="2024-11-13T09:25:04.924034711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:05.049107 containerd[1653]: time="2024-11-13T09:25:05.048345940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-douj7.gb1.brightbox.com,Uid:228c9c7cadd42a03bc61f5340252111f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b86dc84398dff0273fda42df144ae0f094c4d00075b2468481abeaf5e644645e\"" Nov 13 09:25:05.050650 containerd[1653]: time="2024-11-13T09:25:05.050477536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-douj7.gb1.brightbox.com,Uid:0f7f58d9e5c268512ffd03ada62be972,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6303ef9d7cda6cb6c8da6e0f6010bf992e617002835d5d9c2c7946bc749e207\"" Nov 13 09:25:05.057434 containerd[1653]: time="2024-11-13T09:25:05.057203794Z" level=info msg="CreateContainer within sandbox \"f6303ef9d7cda6cb6c8da6e0f6010bf992e617002835d5d9c2c7946bc749e207\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 13 09:25:05.059274 containerd[1653]: time="2024-11-13T09:25:05.058420188Z" level=info msg="CreateContainer within sandbox \"b86dc84398dff0273fda42df144ae0f094c4d00075b2468481abeaf5e644645e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 13 09:25:05.088052 containerd[1653]: time="2024-11-13T09:25:05.087969134Z" level=info msg="CreateContainer within sandbox \"f6303ef9d7cda6cb6c8da6e0f6010bf992e617002835d5d9c2c7946bc749e207\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4b11aed1a3798489a89c6b532b4ab65043545ccb335d5ed0c00e42257914f9f\"" Nov 13 09:25:05.089650 containerd[1653]: time="2024-11-13T09:25:05.089606303Z" level=info msg="StartContainer for \"e4b11aed1a3798489a89c6b532b4ab65043545ccb335d5ed0c00e42257914f9f\"" Nov 13 09:25:05.092775 containerd[1653]: time="2024-11-13T09:25:05.092728887Z" level=info msg="CreateContainer within sandbox \"b86dc84398dff0273fda42df144ae0f094c4d00075b2468481abeaf5e644645e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"389f561496c6e7014ec1c04180508b5ca9280c58e254c9901419a6488652c1e2\"" Nov 13 09:25:05.094858 containerd[1653]: time="2024-11-13T09:25:05.093929588Z" level=info msg="StartContainer for \"389f561496c6e7014ec1c04180508b5ca9280c58e254c9901419a6488652c1e2\"" Nov 13 09:25:05.115177 containerd[1653]: time="2024-11-13T09:25:05.115128519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-douj7.gb1.brightbox.com,Uid:dc8ed392fe05f4fa8fc0aa9619bf0ab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"787ee295cf4e41a9ece6bd6bbea5f6076f8dfc46e5f4bcb97d78022508578b08\"" Nov 13 09:25:05.119461 containerd[1653]: time="2024-11-13T09:25:05.119429884Z" level=info msg="CreateContainer within sandbox \"787ee295cf4e41a9ece6bd6bbea5f6076f8dfc46e5f4bcb97d78022508578b08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 13 09:25:05.155772 containerd[1653]: time="2024-11-13T09:25:05.155662980Z" level=info msg="CreateContainer within sandbox \"787ee295cf4e41a9ece6bd6bbea5f6076f8dfc46e5f4bcb97d78022508578b08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f719bbe6a27c137e4b40484bdc2f166278834b2bf0321743d78c8bc6d91b7b7e\"" Nov 13 09:25:05.157277 containerd[1653]: time="2024-11-13T09:25:05.157243994Z" level=info msg="StartContainer for \"f719bbe6a27c137e4b40484bdc2f166278834b2bf0321743d78c8bc6d91b7b7e\"" Nov 13 09:25:05.245697 containerd[1653]: time="2024-11-13T09:25:05.245535427Z" 
level=info msg="StartContainer for \"389f561496c6e7014ec1c04180508b5ca9280c58e254c9901419a6488652c1e2\" returns successfully" Nov 13 09:25:05.264491 containerd[1653]: time="2024-11-13T09:25:05.263958476Z" level=info msg="StartContainer for \"e4b11aed1a3798489a89c6b532b4ab65043545ccb335d5ed0c00e42257914f9f\" returns successfully" Nov 13 09:25:05.324434 containerd[1653]: time="2024-11-13T09:25:05.323411898Z" level=info msg="StartContainer for \"f719bbe6a27c137e4b40484bdc2f166278834b2bf0321743d78c8bc6d91b7b7e\" returns successfully" Nov 13 09:25:05.464515 kubelet[2618]: E1113 09:25:05.464473 2618 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.76.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.76.174:6443: connect: connection refused Nov 13 09:25:06.506192 kubelet[2618]: I1113 09:25:06.506140 2618 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:08.418970 kubelet[2618]: I1113 09:25:08.418908 2618 kubelet_node_status.go:76] "Successfully registered node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:08.505144 kubelet[2618]: E1113 09:25:08.505082 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 13 09:25:09.360932 kubelet[2618]: I1113 09:25:09.358410 2618 apiserver.go:52] "Watching apiserver" Nov 13 09:25:09.390170 kubelet[2618]: I1113 09:25:09.390023 2618 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 09:25:11.478243 systemd[1]: Reloading requested from client PID 2892 ('systemctl') (unit session-11.scope)... Nov 13 09:25:11.478281 systemd[1]: Reloading... Nov 13 09:25:11.604872 zram_generator::config[2931]: No configuration found. Nov 13 09:25:11.799143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 09:25:11.919335 systemd[1]: Reloading finished in 440 ms. Nov 13 09:25:11.971764 kubelet[2618]: I1113 09:25:11.971460 2618 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 09:25:11.971814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:25:11.982350 systemd[1]: kubelet.service: Deactivated successfully. Nov 13 09:25:11.982954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:25:11.993268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 09:25:12.196044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 09:25:12.219558 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 09:25:12.336413 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 09:25:12.336413 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 13 09:25:12.336413 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 09:25:12.337565 kubelet[3004]: I1113 09:25:12.337005 3004 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 09:25:12.338922 sudo[3017]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 13 09:25:12.339467 sudo[3017]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 13 09:25:12.346961 kubelet[3004]: I1113 09:25:12.346731 3004 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 13 09:25:12.346961 kubelet[3004]: I1113 09:25:12.346764 3004 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 09:25:12.347945 kubelet[3004]: I1113 09:25:12.347064 3004 server.go:919] "Client rotation is on, will bootstrap in background" Nov 13 09:25:12.350327 kubelet[3004]: I1113 09:25:12.350287 3004 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 13 09:25:12.356931 kubelet[3004]: I1113 09:25:12.356886 3004 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 09:25:12.384407 kubelet[3004]: I1113 09:25:12.384274 3004 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 13 09:25:12.386563 kubelet[3004]: I1113 09:25:12.386488 3004 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 09:25:12.387171 kubelet[3004]: I1113 09:25:12.387039 3004 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 09:25:12.387171 kubelet[3004]: I1113 09:25:12.387086 3004 topology_manager.go:138] "Creating topology manager with none policy" Nov 13 09:25:12.387723 kubelet[3004]: I1113 09:25:12.387102 3004 container_manager_linux.go:301] "Creating device 
plugin manager" Nov 13 09:25:12.387723 kubelet[3004]: I1113 09:25:12.387530 3004 state_mem.go:36] "Initialized new in-memory state store" Nov 13 09:25:12.389506 kubelet[3004]: I1113 09:25:12.388942 3004 kubelet.go:396] "Attempting to sync node with API server" Nov 13 09:25:12.389506 kubelet[3004]: I1113 09:25:12.388982 3004 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 09:25:12.389506 kubelet[3004]: I1113 09:25:12.389022 3004 kubelet.go:312] "Adding apiserver pod source" Nov 13 09:25:12.389506 kubelet[3004]: I1113 09:25:12.389058 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 09:25:12.393629 kubelet[3004]: I1113 09:25:12.393595 3004 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 13 09:25:12.396396 kubelet[3004]: I1113 09:25:12.395321 3004 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 09:25:12.397574 kubelet[3004]: I1113 09:25:12.397536 3004 server.go:1256] "Started kubelet" Nov 13 09:25:12.409211 kubelet[3004]: I1113 09:25:12.408881 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 09:25:12.419258 kubelet[3004]: I1113 09:25:12.419229 3004 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 09:25:12.420709 kubelet[3004]: I1113 09:25:12.420687 3004 server.go:461] "Adding debug handlers to kubelet server" Nov 13 09:25:12.430390 kubelet[3004]: I1113 09:25:12.430283 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 09:25:12.431038 kubelet[3004]: I1113 09:25:12.431018 3004 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 09:25:12.433001 kubelet[3004]: I1113 09:25:12.432980 3004 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 13 09:25:12.440190 kubelet[3004]: I1113 09:25:12.433420 3004 reconciler_new.go:29] "Reconciler: start to sync state" Nov 13 09:25:12.440190 kubelet[3004]: I1113 09:25:12.438675 3004 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 09:25:12.440190 kubelet[3004]: I1113 09:25:12.439339 3004 factory.go:221] Registration of the systemd container factory successfully Nov 13 09:25:12.440190 kubelet[3004]: I1113 09:25:12.439442 3004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 09:25:12.460500 kubelet[3004]: I1113 09:25:12.460343 3004 factory.go:221] Registration of the containerd container factory successfully Nov 13 09:25:12.464205 kubelet[3004]: E1113 09:25:12.464170 3004 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 09:25:12.501363 kubelet[3004]: I1113 09:25:12.501303 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 09:25:12.510180 kubelet[3004]: I1113 09:25:12.510130 3004 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 13 09:25:12.510404 kubelet[3004]: I1113 09:25:12.510384 3004 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 09:25:12.510569 kubelet[3004]: I1113 09:25:12.510543 3004 kubelet.go:2329] "Starting kubelet main sync loop" Nov 13 09:25:12.515969 kubelet[3004]: E1113 09:25:12.515948 3004 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 09:25:12.555360 kubelet[3004]: I1113 09:25:12.555323 3004 kubelet_node_status.go:73] "Attempting to register node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.573127 kubelet[3004]: I1113 09:25:12.573089 3004 kubelet_node_status.go:112] "Node was previously registered" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.573451 kubelet[3004]: I1113 09:25:12.573432 3004 kubelet_node_status.go:76] "Successfully registered node" node="srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.617111 kubelet[3004]: E1113 09:25:12.616731 3004 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 13 09:25:12.623570 kubelet[3004]: I1113 09:25:12.623549 3004 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 09:25:12.623871 kubelet[3004]: I1113 09:25:12.623719 3004 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 09:25:12.623871 kubelet[3004]: I1113 09:25:12.623751 3004 state_mem.go:36] "Initialized new in-memory state store" Nov 13 09:25:12.624384 kubelet[3004]: I1113 09:25:12.624200 3004 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 13 09:25:12.624384 kubelet[3004]: I1113 09:25:12.624241 3004 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 13 09:25:12.627735 kubelet[3004]: I1113 09:25:12.627706 3004 policy_none.go:49] "None policy: Start" Nov 13 09:25:12.635888 kubelet[3004]: I1113 09:25:12.634051 3004 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 09:25:12.635888 kubelet[3004]: I1113 09:25:12.634101 3004 state_mem.go:35] "Initializing new in-memory state store" Nov 13 09:25:12.635888 kubelet[3004]: I1113 09:25:12.634473 3004 state_mem.go:75] "Updated machine memory state" Nov 13 09:25:12.637681 kubelet[3004]: I1113 09:25:12.637660 3004 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 09:25:12.642658 kubelet[3004]: I1113 09:25:12.642513 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 09:25:12.817955 kubelet[3004]: I1113 09:25:12.817764 3004 topology_manager.go:215] "Topology Admit Handler" podUID="228c9c7cadd42a03bc61f5340252111f" podNamespace="kube-system" podName="kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.818229 kubelet[3004]: I1113 09:25:12.818204 3004 topology_manager.go:215] "Topology Admit Handler" podUID="0f7f58d9e5c268512ffd03ada62be972" podNamespace="kube-system" podName="kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.818307 kubelet[3004]: I1113 09:25:12.818287 3004 topology_manager.go:215] "Topology Admit Handler" podUID="dc8ed392fe05f4fa8fc0aa9619bf0ab5" podNamespace="kube-system" podName="kube-scheduler-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.841931 kubelet[3004]: W1113 09:25:12.840540 3004 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 09:25:12.844300 kubelet[3004]: W1113 09:25:12.843026 3004 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 09:25:12.844469 kubelet[3004]: W1113 09:25:12.844177 3004 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 09:25:12.844638 kubelet[3004]: I1113 09:25:12.844586 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-k8s-certs\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844638 kubelet[3004]: I1113 09:25:12.844636 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-kubeconfig\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844892 kubelet[3004]: I1113 09:25:12.844691 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844892 kubelet[3004]: I1113 09:25:12.844731 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc8ed392fe05f4fa8fc0aa9619bf0ab5-kubeconfig\") pod \"kube-scheduler-srv-douj7.gb1.brightbox.com\" (UID: \"dc8ed392fe05f4fa8fc0aa9619bf0ab5\") " pod="kube-system/kube-scheduler-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844892 kubelet[3004]: I1113 09:25:12.844769 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-ca-certs\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844892 kubelet[3004]: I1113 09:25:12.844803 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-ca-certs\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.844892 kubelet[3004]: I1113 09:25:12.844851 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f7f58d9e5c268512ffd03ada62be972-flexvolume-dir\") pod \"kube-controller-manager-srv-douj7.gb1.brightbox.com\" (UID: \"0f7f58d9e5c268512ffd03ada62be972\") " pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.845410 kubelet[3004]: I1113 09:25:12.844911 3004 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-k8s-certs\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:12.845410 kubelet[3004]: I1113 09:25:12.844952 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/228c9c7cadd42a03bc61f5340252111f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-douj7.gb1.brightbox.com\" (UID: \"228c9c7cadd42a03bc61f5340252111f\") " pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" Nov 13 09:25:13.217962 sudo[3017]: pam_unix(sudo:session): session closed for user root Nov 13 09:25:13.393975 kubelet[3004]: I1113 09:25:13.393921 3004 apiserver.go:52] "Watching apiserver" Nov 13 09:25:13.439335 kubelet[3004]: I1113 09:25:13.439248 3004 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 09:25:13.579740 kubelet[3004]: I1113 09:25:13.577868 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-douj7.gb1.brightbox.com" podStartSLOduration=1.577775216 podStartE2EDuration="1.577775216s" podCreationTimestamp="2024-11-13 09:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:13.577305107 +0000 UTC m=+1.348559772" watchObservedRunningTime="2024-11-13 09:25:13.577775216 +0000 UTC m=+1.349029858" Nov 13 09:25:13.626866 kubelet[3004]: I1113 09:25:13.624770 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-douj7.gb1.brightbox.com" podStartSLOduration=1.624686975 podStartE2EDuration="1.624686975s" podCreationTimestamp="2024-11-13 09:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:13.591034915 +0000 UTC m=+1.362289580" watchObservedRunningTime="2024-11-13 09:25:13.624686975 +0000 UTC m=+1.395941622" Nov 13 09:25:13.664531 kubelet[3004]: I1113 09:25:13.663277 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-douj7.gb1.brightbox.com" podStartSLOduration=1.663194635 podStartE2EDuration="1.663194635s" podCreationTimestamp="2024-11-13 09:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:13.627560349 +0000 UTC m=+1.398814999" watchObservedRunningTime="2024-11-13 09:25:13.663194635 +0000 UTC m=+1.434449280" Nov 13 09:25:15.451459 sudo[1973]: pam_unix(sudo:session): session closed for user root Nov 13 09:25:15.595463 sshd[1972]: Connection closed by 139.178.68.195 port 42772 Nov 13 09:25:15.598274 sshd-session[1969]: pam_unix(sshd:session): session closed for user core Nov 13 09:25:15.605651 systemd[1]: sshd@8-10.230.76.174:22-139.178.68.195:42772.service: Deactivated successfully. Nov 13 09:25:15.610140 systemd[1]: session-11.scope: Deactivated successfully. Nov 13 09:25:15.610587 systemd-logind[1628]: Session 11 logged out. Waiting for processes to exit. Nov 13 09:25:15.614726 systemd-logind[1628]: Removed session 11. 
Nov 13 09:25:25.789067 kubelet[3004]: I1113 09:25:25.789013 3004 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 13 09:25:25.790858 containerd[1653]: time="2024-11-13T09:25:25.790767509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 13 09:25:25.792664 kubelet[3004]: I1113 09:25:25.792005 3004 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 13 09:25:25.830414 kubelet[3004]: I1113 09:25:25.828418 3004 topology_manager.go:215] "Topology Admit Handler" podUID="01c4668a-d728-4948-900b-24e7beed2817" podNamespace="kube-system" podName="kube-proxy-m6zqt" Nov 13 09:25:25.836073 kubelet[3004]: I1113 09:25:25.835990 3004 topology_manager.go:215] "Topology Admit Handler" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" podNamespace="kube-system" podName="cilium-dnlj5" Nov 13 09:25:25.851502 kubelet[3004]: W1113 09:25:25.850392 3004 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.851502 kubelet[3004]: E1113 09:25:25.850509 3004 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.851502 kubelet[3004]: W1113 09:25:25.850585 3004 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.851502 kubelet[3004]: E1113 09:25:25.850614 3004 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.851502 kubelet[3004]: W1113 09:25:25.850806 3004 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.852917 kubelet[3004]: E1113 09:25:25.850829 3004 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.852917 kubelet[3004]: W1113 09:25:25.850905 3004 reflector.go:539] 
object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.853702 kubelet[3004]: E1113 09:25:25.853536 3004 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.853702 kubelet[3004]: W1113 09:25:25.853668 3004 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.854320 kubelet[3004]: E1113 09:25:25.854183 3004 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-douj7.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-douj7.gb1.brightbox.com' and this object Nov 13 09:25:25.869876 kubelet[3004]: I1113 09:25:25.869814 3004 topology_manager.go:215] "Topology Admit Handler" podUID="2c02529b-41ea-4a82-910a-02b153778918" podNamespace="kube-system" podName="cilium-operator-5cc964979-dmmpw" Nov 13 09:25:25.933581 kubelet[3004]: I1113 09:25:25.933345 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckksg\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-kube-api-access-ckksg\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.933581 kubelet[3004]: I1113 09:25:25.933483 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hostproc\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.933581 kubelet[3004]: I1113 09:25:25.933525 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-clustermesh-secrets\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.933581 kubelet[3004]: I1113 09:25:25.933557 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-net\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.933581 kubelet[3004]: I1113 09:25:25.933586 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-bpf-maps\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935214 kubelet[3004]: I1113 09:25:25.933618 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfdf\" (UniqueName: \"kubernetes.io/projected/2c02529b-41ea-4a82-910a-02b153778918-kube-api-access-jpfdf\") pod \"cilium-operator-5cc964979-dmmpw\" (UID: \"2c02529b-41ea-4a82-910a-02b153778918\") " pod="kube-system/cilium-operator-5cc964979-dmmpw" Nov 13 09:25:25.935214 kubelet[3004]: I1113 09:25:25.933668 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs6m\" (UniqueName: \"kubernetes.io/projected/01c4668a-d728-4948-900b-24e7beed2817-kube-api-access-pgs6m\") pod \"kube-proxy-m6zqt\" (UID: \"01c4668a-d728-4948-900b-24e7beed2817\") " pod="kube-system/kube-proxy-m6zqt" Nov 13 09:25:25.935214 kubelet[3004]: I1113 09:25:25.933702 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-run\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935214 kubelet[3004]: I1113 09:25:25.933740 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-xtables-lock\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935214 kubelet[3004]: I1113 09:25:25.933777 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.933807 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01c4668a-d728-4948-900b-24e7beed2817-kube-proxy\") pod \"kube-proxy-m6zqt\" (UID: \"01c4668a-d728-4948-900b-24e7beed2817\") " pod="kube-system/kube-proxy-m6zqt" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.933912 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-cgroup\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.933943 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-lib-modules\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.933978 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01c4668a-d728-4948-900b-24e7beed2817-lib-modules\") pod \"kube-proxy-m6zqt\" (UID: 
\"01c4668a-d728-4948-900b-24e7beed2817\") " pod="kube-system/kube-proxy-m6zqt" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.934022 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-kernel\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935445 kubelet[3004]: I1113 09:25:25.934072 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-etc-cni-netd\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935711 kubelet[3004]: I1113 09:25:25.934123 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hubble-tls\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:25.935711 kubelet[3004]: I1113 09:25:25.934191 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01c4668a-d728-4948-900b-24e7beed2817-xtables-lock\") pod \"kube-proxy-m6zqt\" (UID: \"01c4668a-d728-4948-900b-24e7beed2817\") " pod="kube-system/kube-proxy-m6zqt" Nov 13 09:25:25.935711 kubelet[3004]: I1113 09:25:25.934228 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path\") pod \"cilium-operator-5cc964979-dmmpw\" (UID: \"2c02529b-41ea-4a82-910a-02b153778918\") " pod="kube-system/cilium-operator-5cc964979-dmmpw" Nov 13 09:25:25.935711 kubelet[3004]: I1113 09:25:25.934258 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cni-path\") pod \"cilium-dnlj5\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " pod="kube-system/cilium-dnlj5" Nov 13 09:25:26.762533 containerd[1653]: time="2024-11-13T09:25:26.761666789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6zqt,Uid:01c4668a-d728-4948-900b-24e7beed2817,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:26.798754 containerd[1653]: time="2024-11-13T09:25:26.798391181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:26.798754 containerd[1653]: time="2024-11-13T09:25:26.798508901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:26.798754 containerd[1653]: time="2024-11-13T09:25:26.798535064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:26.798754 containerd[1653]: time="2024-11-13T09:25:26.798687376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:26.867325 containerd[1653]: time="2024-11-13T09:25:26.866183437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m6zqt,Uid:01c4668a-d728-4948-900b-24e7beed2817,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eb33f2e468992b3e767349d1b53af816664571a926aa29160625fec7fed9784\"" Nov 13 09:25:26.873197 containerd[1653]: time="2024-11-13T09:25:26.872914777Z" level=info msg="CreateContainer within sandbox \"0eb33f2e468992b3e767349d1b53af816664571a926aa29160625fec7fed9784\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 13 09:25:26.889820 containerd[1653]: time="2024-11-13T09:25:26.889748660Z" level=info msg="CreateContainer within sandbox \"0eb33f2e468992b3e767349d1b53af816664571a926aa29160625fec7fed9784\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91380ddf1411d11964e2a7f015dc21b5497ea1c9cc36d6170333d117b5b3649f\"" Nov 13 09:25:26.890957 containerd[1653]: time="2024-11-13T09:25:26.890680297Z" level=info msg="StartContainer for \"91380ddf1411d11964e2a7f015dc21b5497ea1c9cc36d6170333d117b5b3649f\"" Nov 13 09:25:26.985077 containerd[1653]: time="2024-11-13T09:25:26.984947810Z" level=info msg="StartContainer for \"91380ddf1411d11964e2a7f015dc21b5497ea1c9cc36d6170333d117b5b3649f\" returns successfully" Nov 13 09:25:27.057109 kubelet[3004]: E1113 09:25:27.056922 3004 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 13 09:25:27.058074 kubelet[3004]: E1113 09:25:27.057584 3004 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 13 09:25:27.058074 kubelet[3004]: E1113 09:25:27.057781 3004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path podName:2c02529b-41ea-4a82-910a-02b153778918 nodeName:}" failed. No retries permitted until 2024-11-13 09:25:27.557072084 +0000 UTC m=+15.328326713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path") pod "cilium-operator-5cc964979-dmmpw" (UID: "2c02529b-41ea-4a82-910a-02b153778918") : failed to sync configmap cache: timed out waiting for the condition Nov 13 09:25:27.058074 kubelet[3004]: E1113 09:25:27.057889 3004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path podName:68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e nodeName:}" failed. No retries permitted until 2024-11-13 09:25:27.557874418 +0000 UTC m=+15.329129058 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path") pod "cilium-dnlj5" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e") : failed to sync configmap cache: timed out waiting for the condition Nov 13 09:25:27.593100 kubelet[3004]: I1113 09:25:27.593047 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m6zqt" podStartSLOduration=2.592903536 podStartE2EDuration="2.592903536s" podCreationTimestamp="2024-11-13 09:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:27.592458958 +0000 UTC m=+15.363713626" watchObservedRunningTime="2024-11-13 09:25:27.592903536 +0000 UTC m=+15.364158171" Nov 13 09:25:27.661622 containerd[1653]: time="2024-11-13T09:25:27.661490938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnlj5,Uid:68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:27.701069 containerd[1653]: time="2024-11-13T09:25:27.700525826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:27.701069 containerd[1653]: time="2024-11-13T09:25:27.700634719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:27.701069 containerd[1653]: time="2024-11-13T09:25:27.700657528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:27.701069 containerd[1653]: time="2024-11-13T09:25:27.700913495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:27.724014 containerd[1653]: time="2024-11-13T09:25:27.723966321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dmmpw,Uid:2c02529b-41ea-4a82-910a-02b153778918,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:27.735771 systemd[1]: run-containerd-runc-k8s.io-847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3-runc.o9p4mr.mount: Deactivated successfully. Nov 13 09:25:27.785613 containerd[1653]: time="2024-11-13T09:25:27.785461593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnlj5,Uid:68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\"" Nov 13 09:25:27.790073 containerd[1653]: time="2024-11-13T09:25:27.790030257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 13 09:25:27.801063 containerd[1653]: time="2024-11-13T09:25:27.800658336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:27.801063 containerd[1653]: time="2024-11-13T09:25:27.800738607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:27.801063 containerd[1653]: time="2024-11-13T09:25:27.800756168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:27.801063 containerd[1653]: time="2024-11-13T09:25:27.800896544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:27.879885 containerd[1653]: time="2024-11-13T09:25:27.878776212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dmmpw,Uid:2c02529b-41ea-4a82-910a-02b153778918,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\"" Nov 13 09:25:35.095046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000895384.mount: Deactivated successfully. Nov 13 09:25:38.236411 containerd[1653]: time="2024-11-13T09:25:38.236220661Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:25:38.238535 containerd[1653]: time="2024-11-13T09:25:38.238449893Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735271" Nov 13 09:25:38.239639 containerd[1653]: time="2024-11-13T09:25:38.239578219Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:25:38.242776 containerd[1653]: time="2024-11-13T09:25:38.242098580Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.452004957s" Nov 13 09:25:38.242776 containerd[1653]: time="2024-11-13T09:25:38.242149452Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 13 09:25:38.244094 containerd[1653]: time="2024-11-13T09:25:38.244060878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 13 09:25:38.246418 containerd[1653]: time="2024-11-13T09:25:38.246276922Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 13 09:25:38.493926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979972244.mount: Deactivated successfully. 
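The "Pulled image ... in 10.452004957s" figure above is containerd's own measurement, and it lines up with the gap between the PullImage request logged at 2024-11-13T09:25:27.790030257Z and the Pulled event at 2024-11-13T09:25:38.242098580Z. A quick check, with the nanosecond timestamps trimmed to microseconds so `datetime.strptime` can parse them (the snippet is only an illustration, not a containerd interface):

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f"

# Timestamps copied from the containerd entries around the cilium image pull,
# trimmed from nanoseconds to microseconds and with the trailing "Z" dropped.
started = datetime.strptime("2024-11-13T09:25:27.790030", FMT)  # PullImage quay.io/cilium/cilium:v1.12.5@sha256:06ce...
pulled  = datetime.strptime("2024-11-13T09:25:38.242098", FMT)  # Pulled image ... "in 10.452004957s"

print((pulled - started).total_seconds())  # 10.452068 — within ~70 µs of containerd's own figure
```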
Nov 13 09:25:38.497290 containerd[1653]: time="2024-11-13T09:25:38.497055276Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\"" Nov 13 09:25:38.498448 containerd[1653]: time="2024-11-13T09:25:38.498416518Z" level=info msg="StartContainer for \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\"" Nov 13 09:25:38.685288 containerd[1653]: time="2024-11-13T09:25:38.685219608Z" level=info msg="StartContainer for \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\" returns successfully" Nov 13 09:25:38.904044 containerd[1653]: time="2024-11-13T09:25:38.870688770Z" level=info msg="shim disconnected" id=2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f namespace=k8s.io Nov 13 09:25:38.904044 containerd[1653]: time="2024-11-13T09:25:38.903908593Z" level=warning msg="cleaning up after shim disconnected" id=2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f namespace=k8s.io Nov 13 09:25:38.904044 containerd[1653]: time="2024-11-13T09:25:38.903945174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:25:39.488406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f-rootfs.mount: Deactivated successfully. Nov 13 09:25:39.626767 containerd[1653]: time="2024-11-13T09:25:39.626556845Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 13 09:25:39.690925 containerd[1653]: time="2024-11-13T09:25:39.690831528Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\"" Nov 13 09:25:39.692084 containerd[1653]: time="2024-11-13T09:25:39.692051407Z" level=info msg="StartContainer for \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\"" Nov 13 09:25:39.787556 containerd[1653]: time="2024-11-13T09:25:39.786252222Z" level=info msg="StartContainer for \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\" returns successfully" Nov 13 09:25:39.802433 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 09:25:39.804266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 09:25:39.804764 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 13 09:25:39.813592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 09:25:39.853040 containerd[1653]: time="2024-11-13T09:25:39.852743652Z" level=info msg="shim disconnected" id=c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7 namespace=k8s.io Nov 13 09:25:39.853040 containerd[1653]: time="2024-11-13T09:25:39.853023593Z" level=warning msg="cleaning up after shim disconnected" id=c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7 namespace=k8s.io Nov 13 09:25:39.853382 containerd[1653]: time="2024-11-13T09:25:39.853041949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:25:39.856358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 13 09:25:40.490435 systemd[1]: run-containerd-runc-k8s.io-c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7-runc.xPfY3O.mount: Deactivated successfully. Nov 13 09:25:40.491102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7-rootfs.mount: Deactivated successfully. Nov 13 09:25:40.647595 containerd[1653]: time="2024-11-13T09:25:40.646236441Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 13 09:25:40.681704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528246240.mount: Deactivated successfully. Nov 13 09:25:40.692864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549494340.mount: Deactivated successfully. Nov 13 09:25:40.696354 containerd[1653]: time="2024-11-13T09:25:40.695591013Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\"" Nov 13 09:25:40.696973 containerd[1653]: time="2024-11-13T09:25:40.696945207Z" level=info msg="StartContainer for \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\"" Nov 13 09:25:40.840469 containerd[1653]: time="2024-11-13T09:25:40.840121388Z" level=info msg="StartContainer for \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\" returns successfully" Nov 13 09:25:40.996044 containerd[1653]: time="2024-11-13T09:25:40.995798650Z" level=info msg="shim disconnected" id=ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e namespace=k8s.io Nov 13 09:25:40.996044 containerd[1653]: time="2024-11-13T09:25:40.995913478Z" level=warning msg="cleaning up after shim disconnected" id=ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e namespace=k8s.io Nov 13 09:25:40.996851 containerd[1653]: time="2024-11-13T09:25:40.996467217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:25:41.289575 containerd[1653]: time="2024-11-13T09:25:41.289029831Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:25:41.291961 containerd[1653]: time="2024-11-13T09:25:41.291902993Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907249" Nov 13 09:25:41.293113 containerd[1653]: time="2024-11-13T09:25:41.293046856Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 09:25:41.296325 containerd[1653]: time="2024-11-13T09:25:41.295665965Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.051413682s" Nov 13 09:25:41.296325 containerd[1653]: time="2024-11-13T09:25:41.295715407Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 13 09:25:41.303673 containerd[1653]: time="2024-11-13T09:25:41.303003020Z" level=info msg="CreateContainer within sandbox \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 13 09:25:41.331508 containerd[1653]: time="2024-11-13T09:25:41.331438398Z" level=info msg="CreateContainer within sandbox \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\"" Nov 13 09:25:41.333580 containerd[1653]: time="2024-11-13T09:25:41.333532943Z" level=info msg="StartContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\"" Nov 13 09:25:41.434328 containerd[1653]: time="2024-11-13T09:25:41.434072382Z" level=info msg="StartContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" returns successfully" Nov 13 09:25:41.497622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e-rootfs.mount: Deactivated successfully. Nov 13 09:25:41.700322 containerd[1653]: time="2024-11-13T09:25:41.700238262Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 13 09:25:41.711860 kubelet[3004]: I1113 09:25:41.711789 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dmmpw" podStartSLOduration=3.295994462 podStartE2EDuration="16.71171411s" podCreationTimestamp="2024-11-13 09:25:25 +0000 UTC" firstStartedPulling="2024-11-13 09:25:27.880810269 +0000 UTC m=+15.652064899" lastFinishedPulling="2024-11-13 09:25:41.296529919 +0000 UTC m=+29.067784547" observedRunningTime="2024-11-13 09:25:41.710374726 +0000 UTC m=+29.481629390" watchObservedRunningTime="2024-11-13 09:25:41.71171411 +0000 UTC m=+29.482968751" Nov 13 09:25:41.740897 containerd[1653]: time="2024-11-13T09:25:41.739528365Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\"" Nov 13 09:25:41.740897 containerd[1653]: time="2024-11-13T09:25:41.740667163Z" level=info msg="StartContainer for \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\"" Nov 13 09:25:41.965366 containerd[1653]: time="2024-11-13T09:25:41.965168796Z" level=info msg="StartContainer for \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\" returns successfully" Nov 13 09:25:42.071828 containerd[1653]: time="2024-11-13T09:25:42.071621953Z" level=info msg="shim disconnected" id=d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8 namespace=k8s.io Nov 13 09:25:42.071828 containerd[1653]: time="2024-11-13T09:25:42.071723962Z" level=warning msg="cleaning up after shim disconnected" id=d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8 namespace=k8s.io Nov 13 09:25:42.071828 containerd[1653]: time="2024-11-13T09:25:42.071739439Z" level=info msg="cleaning up dead 
shim" namespace=k8s.io Nov 13 09:25:42.497621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8-rootfs.mount: Deactivated successfully. Nov 13 09:25:42.693814 containerd[1653]: time="2024-11-13T09:25:42.693733610Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 13 09:25:42.732317 containerd[1653]: time="2024-11-13T09:25:42.732259635Z" level=info msg="CreateContainer within sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\"" Nov 13 09:25:42.733614 containerd[1653]: time="2024-11-13T09:25:42.733525400Z" level=info msg="StartContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\"" Nov 13 09:25:42.841669 containerd[1653]: time="2024-11-13T09:25:42.839799408Z" level=info msg="StartContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" returns successfully" Nov 13 09:25:43.173593 kubelet[3004]: I1113 09:25:43.171923 3004 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 13 09:25:43.215483 kubelet[3004]: I1113 09:25:43.215152 3004 topology_manager.go:215] "Topology Admit Handler" podUID="55108022-f83d-4579-babb-f453621e57ee" podNamespace="kube-system" podName="coredns-76f75df574-prfwx" Nov 13 09:25:43.219502 kubelet[3004]: I1113 09:25:43.219457 3004 topology_manager.go:215] "Topology Admit Handler" podUID="acc4a569-ee58-4c87-976b-25035b0f97de" podNamespace="kube-system" podName="coredns-76f75df574-c956h" Nov 13 09:25:43.373583 kubelet[3004]: I1113 09:25:43.373289 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf59p\" (UniqueName: \"kubernetes.io/projected/55108022-f83d-4579-babb-f453621e57ee-kube-api-access-gf59p\") pod \"coredns-76f75df574-prfwx\" (UID: \"55108022-f83d-4579-babb-f453621e57ee\") " pod="kube-system/coredns-76f75df574-prfwx" Nov 13 09:25:43.373583 kubelet[3004]: I1113 09:25:43.373389 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55108022-f83d-4579-babb-f453621e57ee-config-volume\") pod \"coredns-76f75df574-prfwx\" (UID: \"55108022-f83d-4579-babb-f453621e57ee\") " pod="kube-system/coredns-76f75df574-prfwx" Nov 13 09:25:43.373583 kubelet[3004]: I1113 09:25:43.373432 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acc4a569-ee58-4c87-976b-25035b0f97de-config-volume\") pod \"coredns-76f75df574-c956h\" (UID: \"acc4a569-ee58-4c87-976b-25035b0f97de\") " pod="kube-system/coredns-76f75df574-c956h" Nov 13 09:25:43.373583 kubelet[3004]: I1113 09:25:43.373469 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4m8d\" (UniqueName: \"kubernetes.io/projected/acc4a569-ee58-4c87-976b-25035b0f97de-kube-api-access-n4m8d\") pod \"coredns-76f75df574-c956h\" (UID: \"acc4a569-ee58-4c87-976b-25035b0f97de\") " pod="kube-system/coredns-76f75df574-c956h" Nov 13 09:25:43.556228 containerd[1653]: time="2024-11-13T09:25:43.556040774Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-prfwx,Uid:55108022-f83d-4579-babb-f453621e57ee,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:43.560096 containerd[1653]: time="2024-11-13T09:25:43.560028503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c956h,Uid:acc4a569-ee58-4c87-976b-25035b0f97de,Namespace:kube-system,Attempt:0,}" Nov 13 09:25:43.830983 kubelet[3004]: I1113 09:25:43.830595 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dnlj5" podStartSLOduration=8.375848035 podStartE2EDuration="18.830540524s" podCreationTimestamp="2024-11-13 09:25:25 +0000 UTC" firstStartedPulling="2024-11-13 09:25:27.788552411 +0000 UTC m=+15.559807045" lastFinishedPulling="2024-11-13 09:25:38.24324489 +0000 UTC m=+26.014499534" observedRunningTime="2024-11-13 09:25:43.825883334 +0000 UTC m=+31.597137981" watchObservedRunningTime="2024-11-13 09:25:43.830540524 +0000 UTC m=+31.601795175" Nov 13 09:25:45.714564 systemd-networkd[1258]: cilium_host: Link UP Nov 13 09:25:45.714821 systemd-networkd[1258]: cilium_net: Link UP Nov 13 09:25:45.716185 systemd-networkd[1258]: cilium_net: Gained carrier Nov 13 09:25:45.717340 systemd-networkd[1258]: cilium_host: Gained carrier Nov 13 09:25:45.718186 systemd-networkd[1258]: cilium_net: Gained IPv6LL Nov 13 09:25:45.719827 systemd-networkd[1258]: cilium_host: Gained IPv6LL Nov 13 09:25:45.881889 systemd-networkd[1258]: cilium_vxlan: Link UP Nov 13 09:25:45.881915 systemd-networkd[1258]: cilium_vxlan: Gained carrier Nov 13 09:25:46.388906 kernel: NET: Registered PF_ALG protocol family Nov 13 09:25:47.386248 systemd-networkd[1258]: lxc_health: Link UP Nov 13 09:25:47.393325 systemd-networkd[1258]: lxc_health: Gained carrier Nov 13 09:25:47.708766 systemd-networkd[1258]: lxc96d1b6d4b296: Link UP Nov 13 09:25:47.722982 kernel: eth0: renamed from tmpe65ef Nov 13 09:25:47.737060 systemd-networkd[1258]: lxc96d1b6d4b296: Gained carrier Nov 13 09:25:47.789143 systemd-networkd[1258]: lxcf6495e670cc3: Link UP Nov 13 09:25:47.807592 kernel: eth0: renamed from tmp25736 Nov 13 09:25:47.821451 systemd-networkd[1258]: lxcf6495e670cc3: Gained carrier Nov 13 09:25:47.834644 systemd-networkd[1258]: cilium_vxlan: Gained IPv6LL Nov 13 09:25:48.730288 systemd-networkd[1258]: lxc_health: Gained IPv6LL Nov 13 09:25:48.922015 systemd-networkd[1258]: lxc96d1b6d4b296: Gained IPv6LL Nov 13 09:25:49.690663 systemd-networkd[1258]: lxcf6495e670cc3: Gained IPv6LL Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.463499848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.463222109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.464051404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.464083365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.464247050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.464575896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:53.465874 containerd[1653]: time="2024-11-13T09:25:53.464673098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:53.471589 containerd[1653]: time="2024-11-13T09:25:53.471392193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 09:25:53.558173 systemd[1]: run-containerd-runc-k8s.io-257361c43cab6dfa956e7d2a0f85abb69a617db8fcf52ff553e7f771dae21ccb-runc.hKd2SD.mount: Deactivated successfully. Nov 13 09:25:53.672534 containerd[1653]: time="2024-11-13T09:25:53.672405725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-prfwx,Uid:55108022-f83d-4579-babb-f453621e57ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65ef5df09787b2d4dd6ab76332bf500bafd47778ee782e672f0f941bcab9536\"" Nov 13 09:25:53.693917 containerd[1653]: time="2024-11-13T09:25:53.693644240Z" level=info msg="CreateContainer within sandbox \"e65ef5df09787b2d4dd6ab76332bf500bafd47778ee782e672f0f941bcab9536\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 09:25:53.702415 containerd[1653]: time="2024-11-13T09:25:53.702370182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c956h,Uid:acc4a569-ee58-4c87-976b-25035b0f97de,Namespace:kube-system,Attempt:0,} returns sandbox id \"257361c43cab6dfa956e7d2a0f85abb69a617db8fcf52ff553e7f771dae21ccb\"" Nov 13 09:25:53.708741 containerd[1653]: time="2024-11-13T09:25:53.708564198Z" level=info msg="CreateContainer within sandbox \"257361c43cab6dfa956e7d2a0f85abb69a617db8fcf52ff553e7f771dae21ccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 09:25:53.723215 containerd[1653]: time="2024-11-13T09:25:53.723042946Z" level=info msg="CreateContainer within sandbox \"e65ef5df09787b2d4dd6ab76332bf500bafd47778ee782e672f0f941bcab9536\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24c4e6881be0311f71eb61bc58a886c22a83cd9aaee2cd6accab09c09fe7bc63\"" Nov 13 09:25:53.726180 containerd[1653]: time="2024-11-13T09:25:53.724564236Z" level=info msg="StartContainer for \"24c4e6881be0311f71eb61bc58a886c22a83cd9aaee2cd6accab09c09fe7bc63\"" Nov 13 09:25:53.726997 containerd[1653]: time="2024-11-13T09:25:53.726456633Z" level=info msg="CreateContainer within sandbox \"257361c43cab6dfa956e7d2a0f85abb69a617db8fcf52ff553e7f771dae21ccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fafddf78f9c5aa00173695a74c85677cbc602e1f0cd84e50c7ea8fc4688a0d0\"" Nov 13 09:25:53.727998 containerd[1653]: time="2024-11-13T09:25:53.727324193Z" level=info msg="StartContainer for \"5fafddf78f9c5aa00173695a74c85677cbc602e1f0cd84e50c7ea8fc4688a0d0\"" Nov 13 09:25:53.841611 containerd[1653]: time="2024-11-13T09:25:53.841532273Z" level=info msg="StartContainer for \"5fafddf78f9c5aa00173695a74c85677cbc602e1f0cd84e50c7ea8fc4688a0d0\" returns successfully" Nov 13 09:25:53.850073 containerd[1653]: time="2024-11-13T09:25:53.850005785Z" level=info msg="StartContainer for \"24c4e6881be0311f71eb61bc58a886c22a83cd9aaee2cd6accab09c09fe7bc63\" returns successfully" Nov 13 09:25:53.900673 kubelet[3004]: I1113 09:25:53.900610 3004 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-prfwx" podStartSLOduration=28.900546233 podStartE2EDuration="28.900546233s" podCreationTimestamp="2024-11-13 09:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:53.898528512 +0000 UTC m=+41.669783160" watchObservedRunningTime="2024-11-13 09:25:53.900546233 +0000 UTC m=+41.671800883" Nov 13 09:25:53.902446 kubelet[3004]: I1113 09:25:53.900778 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c956h" podStartSLOduration=28.900753295 podStartE2EDuration="28.900753295s" podCreationTimestamp="2024-11-13 09:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:25:53.879592053 +0000 UTC m=+41.650846709" watchObservedRunningTime="2024-11-13 09:25:53.900753295 +0000 UTC m=+41.672007949" Nov 13 09:26:19.327353 systemd[1]: Started sshd@9-10.230.76.174:22-139.178.68.195:53842.service - OpenSSH per-connection server daemon (139.178.68.195:53842). Nov 13 09:26:20.260247 sshd[4377]: Accepted publickey for core from 139.178.68.195 port 53842 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:20.262893 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:20.278936 systemd-logind[1628]: New session 12 of user core. Nov 13 09:26:20.288802 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 13 09:26:21.400913 sshd[4380]: Connection closed by 139.178.68.195 port 53842 Nov 13 09:26:21.402052 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:21.407671 systemd[1]: sshd@9-10.230.76.174:22-139.178.68.195:53842.service: Deactivated successfully. Nov 13 09:26:21.411298 systemd[1]: session-12.scope: Deactivated successfully. Nov 13 09:26:21.411584 systemd-logind[1628]: Session 12 logged out. Waiting for processes to exit. Nov 13 09:26:21.414264 systemd-logind[1628]: Removed session 12. Nov 13 09:26:26.552263 systemd[1]: Started sshd@10-10.230.76.174:22-139.178.68.195:54504.service - OpenSSH per-connection server daemon (139.178.68.195:54504). Nov 13 09:26:27.470684 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 54504 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:27.472737 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:27.480499 systemd-logind[1628]: New session 13 of user core. Nov 13 09:26:27.492519 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 13 09:26:28.198604 sshd[4397]: Connection closed by 139.178.68.195 port 54504 Nov 13 09:26:28.198436 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:28.203197 systemd[1]: sshd@10-10.230.76.174:22-139.178.68.195:54504.service: Deactivated successfully. Nov 13 09:26:28.208358 systemd-logind[1628]: Session 13 logged out. Waiting for processes to exit. Nov 13 09:26:28.209224 systemd[1]: session-13.scope: Deactivated successfully. Nov 13 09:26:28.211300 systemd-logind[1628]: Removed session 13. Nov 13 09:26:33.350440 systemd[1]: Started sshd@11-10.230.76.174:22-139.178.68.195:54512.service - OpenSSH per-connection server daemon (139.178.68.195:54512). 
Nov 13 09:26:34.255820 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 54512 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:34.258258 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:34.265573 systemd-logind[1628]: New session 14 of user core. Nov 13 09:26:34.270112 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 13 09:26:34.962590 sshd[4411]: Connection closed by 139.178.68.195 port 54512 Nov 13 09:26:34.963190 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:34.968576 systemd[1]: sshd@11-10.230.76.174:22-139.178.68.195:54512.service: Deactivated successfully. Nov 13 09:26:34.973499 systemd-logind[1628]: Session 14 logged out. Waiting for processes to exit. Nov 13 09:26:34.975355 systemd[1]: session-14.scope: Deactivated successfully. Nov 13 09:26:34.977146 systemd-logind[1628]: Removed session 14. Nov 13 09:26:40.113213 systemd[1]: Started sshd@12-10.230.76.174:22-139.178.68.195:36226.service - OpenSSH per-connection server daemon (139.178.68.195:36226). Nov 13 09:26:41.016912 sshd[4423]: Accepted publickey for core from 139.178.68.195 port 36226 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:41.018913 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:41.025987 systemd-logind[1628]: New session 15 of user core. Nov 13 09:26:41.040126 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 13 09:26:41.719705 sshd[4426]: Connection closed by 139.178.68.195 port 36226 Nov 13 09:26:41.720814 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:41.726174 systemd[1]: sshd@12-10.230.76.174:22-139.178.68.195:36226.service: Deactivated successfully. Nov 13 09:26:41.732266 systemd[1]: session-15.scope: Deactivated successfully. Nov 13 09:26:41.733944 systemd-logind[1628]: Session 15 logged out. Waiting for processes to exit. Nov 13 09:26:41.735647 systemd-logind[1628]: Removed session 15. Nov 13 09:26:41.870312 systemd[1]: Started sshd@13-10.230.76.174:22-139.178.68.195:36236.service - OpenSSH per-connection server daemon (139.178.68.195:36236). Nov 13 09:26:42.774214 sshd[4438]: Accepted publickey for core from 139.178.68.195 port 36236 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:42.776276 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:42.783984 systemd-logind[1628]: New session 16 of user core. Nov 13 09:26:42.789233 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 13 09:26:43.573555 sshd[4441]: Connection closed by 139.178.68.195 port 36236 Nov 13 09:26:43.573401 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:43.578889 systemd-logind[1628]: Session 16 logged out. Waiting for processes to exit. Nov 13 09:26:43.579203 systemd[1]: sshd@13-10.230.76.174:22-139.178.68.195:36236.service: Deactivated successfully. Nov 13 09:26:43.584548 systemd[1]: session-16.scope: Deactivated successfully. Nov 13 09:26:43.585811 systemd-logind[1628]: Removed session 16. Nov 13 09:26:43.730563 systemd[1]: Started sshd@14-10.230.76.174:22-139.178.68.195:36240.service - OpenSSH per-connection server daemon (139.178.68.195:36240). 
Nov 13 09:26:44.653113 sshd[4451]: Accepted publickey for core from 139.178.68.195 port 36240 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:44.655283 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:44.663099 systemd-logind[1628]: New session 17 of user core. Nov 13 09:26:44.674516 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 13 09:26:45.370495 sshd[4454]: Connection closed by 139.178.68.195 port 36240 Nov 13 09:26:45.371513 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:45.378362 systemd[1]: sshd@14-10.230.76.174:22-139.178.68.195:36240.service: Deactivated successfully. Nov 13 09:26:45.381763 systemd-logind[1628]: Session 17 logged out. Waiting for processes to exit. Nov 13 09:26:45.382900 systemd[1]: session-17.scope: Deactivated successfully. Nov 13 09:26:45.384975 systemd-logind[1628]: Removed session 17. Nov 13 09:26:50.523256 systemd[1]: Started sshd@15-10.230.76.174:22-139.178.68.195:37800.service - OpenSSH per-connection server daemon (139.178.68.195:37800). Nov 13 09:26:51.426078 sshd[4465]: Accepted publickey for core from 139.178.68.195 port 37800 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:51.428230 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:51.435987 systemd-logind[1628]: New session 18 of user core. Nov 13 09:26:51.440709 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 13 09:26:52.123900 sshd[4468]: Connection closed by 139.178.68.195 port 37800 Nov 13 09:26:52.124964 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:52.130614 systemd[1]: sshd@15-10.230.76.174:22-139.178.68.195:37800.service: Deactivated successfully. Nov 13 09:26:52.139368 systemd[1]: session-18.scope: Deactivated successfully. Nov 13 09:26:52.140825 systemd-logind[1628]: Session 18 logged out. Waiting for processes to exit. Nov 13 09:26:52.142607 systemd-logind[1628]: Removed session 18. Nov 13 09:26:57.275428 systemd[1]: Started sshd@16-10.230.76.174:22-139.178.68.195:41382.service - OpenSSH per-connection server daemon (139.178.68.195:41382). Nov 13 09:26:58.177225 sshd[4479]: Accepted publickey for core from 139.178.68.195 port 41382 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:58.180137 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:58.188134 systemd-logind[1628]: New session 19 of user core. Nov 13 09:26:58.191391 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 13 09:26:58.884487 sshd[4484]: Connection closed by 139.178.68.195 port 41382 Nov 13 09:26:58.885888 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Nov 13 09:26:58.890726 systemd[1]: sshd@16-10.230.76.174:22-139.178.68.195:41382.service: Deactivated successfully. Nov 13 09:26:58.894954 systemd[1]: session-19.scope: Deactivated successfully. Nov 13 09:26:58.895318 systemd-logind[1628]: Session 19 logged out. Waiting for processes to exit. Nov 13 09:26:58.899513 systemd-logind[1628]: Removed session 19. Nov 13 09:26:59.061256 systemd[1]: Started sshd@17-10.230.76.174:22-139.178.68.195:41394.service - OpenSSH per-connection server daemon (139.178.68.195:41394). 
Nov 13 09:26:59.964886 sshd[4495]: Accepted publickey for core from 139.178.68.195 port 41394 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:26:59.966278 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:26:59.974143 systemd-logind[1628]: New session 20 of user core. Nov 13 09:26:59.979404 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 13 09:27:01.059219 sshd[4498]: Connection closed by 139.178.68.195 port 41394 Nov 13 09:27:01.061077 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:01.068275 systemd[1]: sshd@17-10.230.76.174:22-139.178.68.195:41394.service: Deactivated successfully. Nov 13 09:27:01.072920 systemd-logind[1628]: Session 20 logged out. Waiting for processes to exit. Nov 13 09:27:01.073577 systemd[1]: session-20.scope: Deactivated successfully. Nov 13 09:27:01.076689 systemd-logind[1628]: Removed session 20. Nov 13 09:27:01.213218 systemd[1]: Started sshd@18-10.230.76.174:22-139.178.68.195:41410.service - OpenSSH per-connection server daemon (139.178.68.195:41410). Nov 13 09:27:02.131875 sshd[4508]: Accepted publickey for core from 139.178.68.195 port 41410 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:02.135158 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:02.142422 systemd-logind[1628]: New session 21 of user core. Nov 13 09:27:02.149326 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 13 09:27:04.900091 sshd[4511]: Connection closed by 139.178.68.195 port 41410 Nov 13 09:27:04.901031 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:04.906035 systemd-logind[1628]: Session 21 logged out. Waiting for processes to exit. Nov 13 09:27:04.906482 systemd[1]: sshd@18-10.230.76.174:22-139.178.68.195:41410.service: Deactivated successfully. Nov 13 09:27:04.911915 systemd[1]: session-21.scope: Deactivated successfully. Nov 13 09:27:04.914712 systemd-logind[1628]: Removed session 21. Nov 13 09:27:05.051514 systemd[1]: Started sshd@19-10.230.76.174:22-139.178.68.195:41426.service - OpenSSH per-connection server daemon (139.178.68.195:41426). Nov 13 09:27:05.963916 sshd[4528]: Accepted publickey for core from 139.178.68.195 port 41426 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:05.966193 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:05.975216 systemd-logind[1628]: New session 22 of user core. Nov 13 09:27:05.990070 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 13 09:27:06.877775 sshd[4532]: Connection closed by 139.178.68.195 port 41426 Nov 13 09:27:06.880165 sshd-session[4528]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:06.884480 systemd[1]: sshd@19-10.230.76.174:22-139.178.68.195:41426.service: Deactivated successfully. Nov 13 09:27:06.890062 systemd-logind[1628]: Session 22 logged out. Waiting for processes to exit. Nov 13 09:27:06.891455 systemd[1]: session-22.scope: Deactivated successfully. Nov 13 09:27:06.894213 systemd-logind[1628]: Removed session 22. Nov 13 09:27:07.050474 systemd[1]: Started sshd@20-10.230.76.174:22-139.178.68.195:57554.service - OpenSSH per-connection server daemon (139.178.68.195:57554). 
Nov 13 09:27:07.940066 sshd[4541]: Accepted publickey for core from 139.178.68.195 port 57554 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:07.942010 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:07.949196 systemd-logind[1628]: New session 23 of user core. Nov 13 09:27:07.955251 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 13 09:27:08.648663 sshd[4544]: Connection closed by 139.178.68.195 port 57554 Nov 13 09:27:08.649208 sshd-session[4541]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:08.656327 systemd[1]: sshd@20-10.230.76.174:22-139.178.68.195:57554.service: Deactivated successfully. Nov 13 09:27:08.659838 systemd-logind[1628]: Session 23 logged out. Waiting for processes to exit. Nov 13 09:27:08.660119 systemd[1]: session-23.scope: Deactivated successfully. Nov 13 09:27:08.662905 systemd-logind[1628]: Removed session 23. Nov 13 09:27:13.802471 systemd[1]: Started sshd@21-10.230.76.174:22-139.178.68.195:57558.service - OpenSSH per-connection server daemon (139.178.68.195:57558). Nov 13 09:27:14.702030 sshd[4560]: Accepted publickey for core from 139.178.68.195 port 57558 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:14.704344 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:14.711973 systemd-logind[1628]: New session 24 of user core. Nov 13 09:27:14.722083 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 13 09:27:15.406183 sshd[4563]: Connection closed by 139.178.68.195 port 57558 Nov 13 09:27:15.407785 sshd-session[4560]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:15.414266 systemd-logind[1628]: Session 24 logged out. Waiting for processes to exit. Nov 13 09:27:15.417371 systemd[1]: sshd@21-10.230.76.174:22-139.178.68.195:57558.service: Deactivated successfully. Nov 13 09:27:15.422272 systemd[1]: session-24.scope: Deactivated successfully. Nov 13 09:27:15.423967 systemd-logind[1628]: Removed session 24. Nov 13 09:27:20.563009 systemd[1]: Started sshd@22-10.230.76.174:22-139.178.68.195:39614.service - OpenSSH per-connection server daemon (139.178.68.195:39614). Nov 13 09:27:21.448867 sshd[4574]: Accepted publickey for core from 139.178.68.195 port 39614 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:21.450986 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:21.458925 systemd-logind[1628]: New session 25 of user core. Nov 13 09:27:21.464308 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 13 09:27:22.155223 sshd[4577]: Connection closed by 139.178.68.195 port 39614 Nov 13 09:27:22.156210 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:22.161659 systemd[1]: sshd@22-10.230.76.174:22-139.178.68.195:39614.service: Deactivated successfully. Nov 13 09:27:22.165681 systemd[1]: session-25.scope: Deactivated successfully. Nov 13 09:27:22.167535 systemd-logind[1628]: Session 25 logged out. Waiting for processes to exit. Nov 13 09:27:22.169545 systemd-logind[1628]: Removed session 25. Nov 13 09:27:27.310170 systemd[1]: Started sshd@23-10.230.76.174:22-139.178.68.195:43196.service - OpenSSH per-connection server daemon (139.178.68.195:43196). 
Nov 13 09:27:28.203452 sshd[4590]: Accepted publickey for core from 139.178.68.195 port 43196 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:28.205393 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:28.212268 systemd-logind[1628]: New session 26 of user core. Nov 13 09:27:28.223345 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 13 09:27:28.904902 sshd[4593]: Connection closed by 139.178.68.195 port 43196 Nov 13 09:27:28.906096 sshd-session[4590]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:28.909776 systemd[1]: sshd@23-10.230.76.174:22-139.178.68.195:43196.service: Deactivated successfully. Nov 13 09:27:28.915262 systemd-logind[1628]: Session 26 logged out. Waiting for processes to exit. Nov 13 09:27:28.916373 systemd[1]: session-26.scope: Deactivated successfully. Nov 13 09:27:28.918305 systemd-logind[1628]: Removed session 26. Nov 13 09:27:29.055151 systemd[1]: Started sshd@24-10.230.76.174:22-139.178.68.195:43200.service - OpenSSH per-connection server daemon (139.178.68.195:43200). Nov 13 09:27:29.956491 sshd[4604]: Accepted publickey for core from 139.178.68.195 port 43200 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:29.958539 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:29.965523 systemd-logind[1628]: New session 27 of user core. Nov 13 09:27:29.972328 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 13 09:27:31.952402 containerd[1653]: time="2024-11-13T09:27:31.951690487Z" level=info msg="StopContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" with timeout 30 (s)" Nov 13 09:27:31.958146 containerd[1653]: time="2024-11-13T09:27:31.956984092Z" level=info msg="Stop container \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" with signal terminated" Nov 13 09:27:32.023834 containerd[1653]: time="2024-11-13T09:27:32.023635643Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 09:27:32.038532 containerd[1653]: time="2024-11-13T09:27:32.038480098Z" level=info msg="StopContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" with timeout 2 (s)" Nov 13 09:27:32.039961 containerd[1653]: time="2024-11-13T09:27:32.039141151Z" level=info msg="Stop container \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" with signal terminated" Nov 13 09:27:32.041577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6-rootfs.mount: Deactivated successfully. 
Nov 13 09:27:32.051091 systemd-networkd[1258]: lxc_health: Link DOWN Nov 13 09:27:32.051102 systemd-networkd[1258]: lxc_health: Lost carrier Nov 13 09:27:32.058293 containerd[1653]: time="2024-11-13T09:27:32.056820629Z" level=info msg="shim disconnected" id=a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6 namespace=k8s.io Nov 13 09:27:32.059247 containerd[1653]: time="2024-11-13T09:27:32.058880391Z" level=warning msg="cleaning up after shim disconnected" id=a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6 namespace=k8s.io Nov 13 09:27:32.059247 containerd[1653]: time="2024-11-13T09:27:32.058907510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:27:32.099128 containerd[1653]: time="2024-11-13T09:27:32.098976692Z" level=info msg="StopContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" returns successfully" Nov 13 09:27:32.103942 containerd[1653]: time="2024-11-13T09:27:32.103867218Z" level=info msg="StopPodSandbox for \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\"" Nov 13 09:27:32.104490 containerd[1653]: time="2024-11-13T09:27:32.104222391Z" level=info msg="Container to stop \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.113193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c-shm.mount: Deactivated successfully. Nov 13 09:27:32.144469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb-rootfs.mount: Deactivated successfully. Nov 13 09:27:32.154392 containerd[1653]: time="2024-11-13T09:27:32.154032344Z" level=info msg="shim disconnected" id=96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb namespace=k8s.io Nov 13 09:27:32.154392 containerd[1653]: time="2024-11-13T09:27:32.154131694Z" level=warning msg="cleaning up after shim disconnected" id=96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb namespace=k8s.io Nov 13 09:27:32.154392 containerd[1653]: time="2024-11-13T09:27:32.154146703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.192304179Z" level=info msg="StopContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" returns successfully" Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.193150965Z" level=info msg="StopPodSandbox for \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\"" Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.193192192Z" level=info msg="Container to stop \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.193239387Z" level=info msg="Container to stop \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.193253710Z" level=info msg="Container to stop \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.193350 containerd[1653]: time="2024-11-13T09:27:32.193269992Z" level=info msg="Container to stop 
\"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.193783 containerd[1653]: time="2024-11-13T09:27:32.193284572Z" level=info msg="Container to stop \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 09:27:32.205446 containerd[1653]: time="2024-11-13T09:27:32.202361790Z" level=info msg="shim disconnected" id=7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c namespace=k8s.io Nov 13 09:27:32.205446 containerd[1653]: time="2024-11-13T09:27:32.202427297Z" level=warning msg="cleaning up after shim disconnected" id=7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c namespace=k8s.io Nov 13 09:27:32.205446 containerd[1653]: time="2024-11-13T09:27:32.202442329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:27:32.233018 containerd[1653]: time="2024-11-13T09:27:32.231887259Z" level=warning msg="cleanup warnings time=\"2024-11-13T09:27:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 13 09:27:32.234460 containerd[1653]: time="2024-11-13T09:27:32.234226645Z" level=info msg="TearDown network for sandbox \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\" successfully" Nov 13 09:27:32.234460 containerd[1653]: time="2024-11-13T09:27:32.234263931Z" level=info msg="StopPodSandbox for \"7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c\" returns successfully" Nov 13 09:27:32.262026 containerd[1653]: time="2024-11-13T09:27:32.261919431Z" level=info msg="shim disconnected" id=847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3 namespace=k8s.io Nov 13 09:27:32.262262 containerd[1653]: time="2024-11-13T09:27:32.262089830Z" level=warning msg="cleaning up after shim disconnected" id=847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3 namespace=k8s.io Nov 13 09:27:32.262262 containerd[1653]: time="2024-11-13T09:27:32.262111063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 09:27:32.282007 containerd[1653]: time="2024-11-13T09:27:32.281924468Z" level=info msg="TearDown network for sandbox \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" successfully" Nov 13 09:27:32.282007 containerd[1653]: time="2024-11-13T09:27:32.281985915Z" level=info msg="StopPodSandbox for \"847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3\" returns successfully" Nov 13 09:27:32.335388 kubelet[3004]: I1113 09:27:32.335298 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cni-path\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.335388 kubelet[3004]: I1113 09:27:32.335413 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-xtables-lock\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335453 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-lib-modules\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335490 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-etc-cni-netd\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335527 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckksg\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-kube-api-access-ckksg\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335563 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-net\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335588 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-bpf-maps\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337458 kubelet[3004]: I1113 09:27:32.335644 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpfdf\" (UniqueName: \"kubernetes.io/projected/2c02529b-41ea-4a82-910a-02b153778918-kube-api-access-jpfdf\") pod \"2c02529b-41ea-4a82-910a-02b153778918\" (UID: \"2c02529b-41ea-4a82-910a-02b153778918\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335673 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hostproc\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335702 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335728 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-cgroup\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335768 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-clustermesh-secrets\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335802 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path\") pod \"2c02529b-41ea-4a82-910a-02b153778918\" (UID: \"2c02529b-41ea-4a82-910a-02b153778918\") " Nov 13 09:27:32.337755 kubelet[3004]: I1113 09:27:32.335829 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-run\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.338042 kubelet[3004]: I1113 09:27:32.335906 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-kernel\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.338042 kubelet[3004]: I1113 09:27:32.335935 3004 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hubble-tls\") pod \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\" (UID: \"68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e\") " Nov 13 09:27:32.339355 kubelet[3004]: I1113 09:27:32.328704 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cni-path" (OuterVolumeSpecName: "cni-path") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.340782 kubelet[3004]: I1113 09:27:32.340740 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 09:27:32.340882 kubelet[3004]: I1113 09:27:32.340813 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hostproc" (OuterVolumeSpecName: "hostproc") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.343561 kubelet[3004]: I1113 09:27:32.343131 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.343561 kubelet[3004]: I1113 09:27:32.343189 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.343561 kubelet[3004]: I1113 09:27:32.343232 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.345652 kubelet[3004]: I1113 09:27:32.345254 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 09:27:32.345652 kubelet[3004]: I1113 09:27:32.345317 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.346763 kubelet[3004]: I1113 09:27:32.346730 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-kube-api-access-ckksg" (OuterVolumeSpecName: "kube-api-access-ckksg") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "kube-api-access-ckksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 09:27:32.346932 kubelet[3004]: I1113 09:27:32.346908 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.347102 kubelet[3004]: I1113 09:27:32.347059 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.347393 kubelet[3004]: I1113 09:27:32.347367 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c02529b-41ea-4a82-910a-02b153778918-kube-api-access-jpfdf" (OuterVolumeSpecName: "kube-api-access-jpfdf") pod "2c02529b-41ea-4a82-910a-02b153778918" (UID: "2c02529b-41ea-4a82-910a-02b153778918"). InnerVolumeSpecName "kube-api-access-jpfdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 09:27:32.347533 kubelet[3004]: I1113 09:27:32.347510 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.349418 kubelet[3004]: I1113 09:27:32.349289 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 13 09:27:32.349497 kubelet[3004]: I1113 09:27:32.349425 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" (UID: "68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 09:27:32.351592 kubelet[3004]: I1113 09:27:32.351522 3004 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c02529b-41ea-4a82-910a-02b153778918" (UID: "2c02529b-41ea-4a82-910a-02b153778918"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 09:27:32.437165 kubelet[3004]: I1113 09:27:32.437074 3004 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cni-path\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437165 kubelet[3004]: I1113 09:27:32.437139 3004 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-lib-modules\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437165 kubelet[3004]: I1113 09:27:32.437168 3004 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-xtables-lock\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437165 kubelet[3004]: I1113 09:27:32.437183 3004 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-bpf-maps\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437200 3004 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jpfdf\" (UniqueName: \"kubernetes.io/projected/2c02529b-41ea-4a82-910a-02b153778918-kube-api-access-jpfdf\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437226 3004 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-etc-cni-netd\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437257 3004 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ckksg\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-kube-api-access-ckksg\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437272 3004 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-net\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437299 3004 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-config-path\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437314 3004 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hostproc\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.437662 kubelet[3004]: I1113 09:27:32.437329 3004 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-cgroup\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.438010 kubelet[3004]: I1113 09:27:32.437354 3004 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c02529b-41ea-4a82-910a-02b153778918-cilium-config-path\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.438010 kubelet[3004]: I1113 09:27:32.437370 3004 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-clustermesh-secrets\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.438010 kubelet[3004]: I1113 09:27:32.437386 3004 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-hubble-tls\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.438010 kubelet[3004]: I1113 09:27:32.437402 3004 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-cilium-run\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.438010 kubelet[3004]: I1113 09:27:32.437418 3004 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e-host-proc-sys-kernel\") on node \"srv-douj7.gb1.brightbox.com\" DevicePath \"\"" Nov 13 09:27:32.722027 kubelet[3004]: E1113 09:27:32.721955 3004 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 13 09:27:32.994323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dabcd2c0ec8609d179094ece02d139f4372eab6467620a7c3be5ca57d20304c-rootfs.mount: Deactivated successfully. Nov 13 09:27:32.994557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3-rootfs.mount: Deactivated successfully. Nov 13 09:27:32.994755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-847bae2c7f7802730a6751033f271f1bd784401857e433015bf7266b265f21e3-shm.mount: Deactivated successfully. Nov 13 09:27:32.994935 systemd[1]: var-lib-kubelet-pods-68a34ba7\x2d76fb\x2d47c4\x2d80c3\x2d9f84f7ac5e4e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 13 09:27:32.995100 systemd[1]: var-lib-kubelet-pods-2c02529b\x2d41ea\x2d4a82\x2d910a\x2d02b153778918-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djpfdf.mount: Deactivated successfully. Nov 13 09:27:32.995285 systemd[1]: var-lib-kubelet-pods-68a34ba7\x2d76fb\x2d47c4\x2d80c3\x2d9f84f7ac5e4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckksg.mount: Deactivated successfully. Nov 13 09:27:32.995461 systemd[1]: var-lib-kubelet-pods-68a34ba7\x2d76fb\x2d47c4\x2d80c3\x2d9f84f7ac5e4e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 13 09:27:33.155730 kubelet[3004]: I1113 09:27:33.154679 3004 scope.go:117] "RemoveContainer" containerID="a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6" Nov 13 09:27:33.163720 containerd[1653]: time="2024-11-13T09:27:33.163650540Z" level=info msg="RemoveContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\"" Nov 13 09:27:33.171123 containerd[1653]: time="2024-11-13T09:27:33.168479780Z" level=info msg="RemoveContainer for \"a1f856f4a10ac8ac55eca891da2117279a9c50a90f63f55cadc7f091eda296c6\" returns successfully" Nov 13 09:27:33.173200 kubelet[3004]: I1113 09:27:33.172569 3004 scope.go:117] "RemoveContainer" containerID="96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb" Nov 13 09:27:33.183258 containerd[1653]: time="2024-11-13T09:27:33.182250991Z" level=info msg="RemoveContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\"" Nov 13 09:27:33.187615 containerd[1653]: time="2024-11-13T09:27:33.187501450Z" level=info msg="RemoveContainer for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" returns successfully" Nov 13 09:27:33.188111 kubelet[3004]: I1113 09:27:33.188022 3004 scope.go:117] "RemoveContainer" containerID="d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8" Nov 13 09:27:33.190827 containerd[1653]: time="2024-11-13T09:27:33.190337165Z" level=info msg="RemoveContainer for \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\"" Nov 13 09:27:33.197373 containerd[1653]: time="2024-11-13T09:27:33.194692894Z" level=info msg="RemoveContainer for \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\" returns successfully" Nov 13 09:27:33.197373 containerd[1653]: time="2024-11-13T09:27:33.196713813Z" level=info msg="RemoveContainer for \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\"" Nov 13 09:27:33.197597 kubelet[3004]: I1113 09:27:33.195070 3004 scope.go:117] "RemoveContainer" containerID="ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e" Nov 13 09:27:33.201692 containerd[1653]: time="2024-11-13T09:27:33.200982338Z" level=info msg="RemoveContainer for \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\" returns successfully" Nov 13 09:27:33.201819 kubelet[3004]: I1113 09:27:33.201326 3004 scope.go:117] "RemoveContainer" containerID="c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7" Nov 13 09:27:33.203320 containerd[1653]: time="2024-11-13T09:27:33.203253316Z" level=info msg="RemoveContainer for \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\"" Nov 13 09:27:33.207272 containerd[1653]: time="2024-11-13T09:27:33.207196768Z" level=info msg="RemoveContainer for \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\" returns successfully" Nov 13 09:27:33.207603 kubelet[3004]: I1113 09:27:33.207542 3004 scope.go:117] "RemoveContainer" 
containerID="2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f" Nov 13 09:27:33.209961 containerd[1653]: time="2024-11-13T09:27:33.209924792Z" level=info msg="RemoveContainer for \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\"" Nov 13 09:27:33.213248 containerd[1653]: time="2024-11-13T09:27:33.213172007Z" level=info msg="RemoveContainer for \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\" returns successfully" Nov 13 09:27:33.213493 kubelet[3004]: I1113 09:27:33.213439 3004 scope.go:117] "RemoveContainer" containerID="96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb" Nov 13 09:27:33.213793 containerd[1653]: time="2024-11-13T09:27:33.213735941Z" level=error msg="ContainerStatus for \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\": not found" Nov 13 09:27:33.217424 kubelet[3004]: E1113 09:27:33.217172 3004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\": not found" containerID="96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb" Nov 13 09:27:33.221497 kubelet[3004]: I1113 09:27:33.221379 3004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb"} err="failed to get container status \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"96db32a7dfa75613b5b4afe6a6671df794edf74ce7b1f46b7689552d9d0b4ceb\": not found" Nov 13 09:27:33.221497 kubelet[3004]: I1113 09:27:33.221438 3004 scope.go:117] "RemoveContainer" containerID="d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8" Nov 13 09:27:33.222749 containerd[1653]: time="2024-11-13T09:27:33.222271119Z" level=error msg="ContainerStatus for \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\": not found" Nov 13 09:27:33.222854 kubelet[3004]: E1113 09:27:33.222594 3004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\": not found" containerID="d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8" Nov 13 09:27:33.222854 kubelet[3004]: I1113 09:27:33.222642 3004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8"} err="failed to get container status \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3c2ca6f35e2773c2190062bfd431b9fe32d1195237c5e8cc7d901a7dbd2f4a8\": not found" Nov 13 09:27:33.222854 kubelet[3004]: I1113 09:27:33.222669 3004 scope.go:117] "RemoveContainer" containerID="ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e" Nov 13 09:27:33.223548 containerd[1653]: time="2024-11-13T09:27:33.223361526Z" 
level=error msg="ContainerStatus for \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\": not found" Nov 13 09:27:33.223812 kubelet[3004]: E1113 09:27:33.223676 3004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\": not found" containerID="ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e" Nov 13 09:27:33.223812 kubelet[3004]: I1113 09:27:33.223737 3004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e"} err="failed to get container status \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed95164a8236e450ebd61f1c0fc1264087b8458208a883a77c7c42ce341c4c5e\": not found" Nov 13 09:27:33.223812 kubelet[3004]: I1113 09:27:33.223758 3004 scope.go:117] "RemoveContainer" containerID="c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7" Nov 13 09:27:33.224485 containerd[1653]: time="2024-11-13T09:27:33.224212060Z" level=error msg="ContainerStatus for \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\": not found" Nov 13 09:27:33.224554 kubelet[3004]: E1113 09:27:33.224386 3004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\": not found" containerID="c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7" Nov 13 09:27:33.224554 kubelet[3004]: I1113 09:27:33.224420 3004 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7"} err="failed to get container status \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2db18acb52ba70c8f122c8e19282284237dae7f3cb50e85ba50aae1735601f7\": not found" Nov 13 09:27:33.224814 kubelet[3004]: I1113 09:27:33.224440 3004 scope.go:117] "RemoveContainer" containerID="2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f" Nov 13 09:27:33.225388 containerd[1653]: time="2024-11-13T09:27:33.225265634Z" level=error msg="ContainerStatus for \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\": not found" Nov 13 09:27:33.225632 kubelet[3004]: E1113 09:27:33.225538 3004 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\": not found" containerID="2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f" Nov 13 09:27:33.225632 kubelet[3004]: I1113 09:27:33.225576 3004 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f"} err="failed to get container status \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d7026de9a0131caa8f93f04c62048ad0971d1024d5d3523a680925b5705437f\": not found" Nov 13 09:27:34.015029 sshd[4607]: Connection closed by 139.178.68.195 port 43200 Nov 13 09:27:34.016034 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Nov 13 09:27:34.020718 systemd-logind[1628]: Session 27 logged out. Waiting for processes to exit. Nov 13 09:27:34.021064 systemd[1]: sshd@24-10.230.76.174:22-139.178.68.195:43200.service: Deactivated successfully. Nov 13 09:27:34.025731 systemd[1]: session-27.scope: Deactivated successfully. Nov 13 09:27:34.028058 systemd-logind[1628]: Removed session 27. Nov 13 09:27:34.167296 systemd[1]: Started sshd@25-10.230.76.174:22-139.178.68.195:43208.service - OpenSSH per-connection server daemon (139.178.68.195:43208). Nov 13 09:27:34.519862 kubelet[3004]: I1113 09:27:34.519764 3004 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c02529b-41ea-4a82-910a-02b153778918" path="/var/lib/kubelet/pods/2c02529b-41ea-4a82-910a-02b153778918/volumes" Nov 13 09:27:34.520771 kubelet[3004]: I1113 09:27:34.520745 3004 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" path="/var/lib/kubelet/pods/68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e/volumes" Nov 13 09:27:35.069507 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 43208 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao Nov 13 09:27:35.072944 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 09:27:35.086043 systemd-logind[1628]: New session 28 of user core. Nov 13 09:27:35.092434 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 13 09:27:35.239917 kubelet[3004]: I1113 09:27:35.239606 3004 setters.go:568] "Node became not ready" node="srv-douj7.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-13T09:27:35Z","lastTransitionTime":"2024-11-13T09:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 13 09:27:36.374873 kubelet[3004]: I1113 09:27:36.371794 3004 topology_manager.go:215] "Topology Admit Handler" podUID="6290bcaa-3625-431e-9ddf-7be25815c792" podNamespace="kube-system" podName="cilium-zvlnj"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377509 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="apply-sysctl-overwrites"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377607 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="cilium-agent"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377625 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="mount-bpf-fs"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377637 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c02529b-41ea-4a82-910a-02b153778918" containerName="cilium-operator"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377649 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="clean-cilium-state"
Nov 13 09:27:36.379149 kubelet[3004]: E1113 09:27:36.377662 3004 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="mount-cgroup"
Nov 13 09:27:36.379149 kubelet[3004]: I1113 09:27:36.377752 3004 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a34ba7-76fb-47c4-80c3-9f84f7ac5e4e" containerName="cilium-agent"
Nov 13 09:27:36.379149 kubelet[3004]: I1113 09:27:36.377769 3004 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c02529b-41ea-4a82-910a-02b153778918" containerName="cilium-operator"
Nov 13 09:27:36.478954 kubelet[3004]: I1113 09:27:36.478834 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-host-proc-sys-net\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.478954 kubelet[3004]: I1113 09:27:36.478944 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-lib-modules\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.478954 kubelet[3004]: I1113 09:27:36.478980 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6290bcaa-3625-431e-9ddf-7be25815c792-clustermesh-secrets\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479027 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-host-proc-sys-kernel\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479066 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-xtables-lock\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479154 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-cni-path\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479195 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-bpf-maps\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479234 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6290bcaa-3625-431e-9ddf-7be25815c792-cilium-ipsec-secrets\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.479597 kubelet[3004]: I1113 09:27:36.479267 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6290bcaa-3625-431e-9ddf-7be25815c792-hubble-tls\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.479921 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszpc\" (UniqueName: \"kubernetes.io/projected/6290bcaa-3625-431e-9ddf-7be25815c792-kube-api-access-dszpc\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.479969 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-cilium-run\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.480003 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-etc-cni-netd\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.480032 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6290bcaa-3625-431e-9ddf-7be25815c792-cilium-config-path\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.480062 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-hostproc\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.480231 kubelet[3004]: I1113 09:27:36.480092 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6290bcaa-3625-431e-9ddf-7be25815c792-cilium-cgroup\") pod \"cilium-zvlnj\" (UID: \"6290bcaa-3625-431e-9ddf-7be25815c792\") " pod="kube-system/cilium-zvlnj"
Nov 13 09:27:36.491990 sshd[4778]: Connection closed by 139.178.68.195 port 43208
Nov 13 09:27:36.490252 sshd-session[4774]: pam_unix(sshd:session): session closed for user core
Nov 13 09:27:36.497352 systemd[1]: sshd@25-10.230.76.174:22-139.178.68.195:43208.service: Deactivated successfully.
Nov 13 09:27:36.503184 systemd-logind[1628]: Session 28 logged out. Waiting for processes to exit.
Nov 13 09:27:36.503995 systemd[1]: session-28.scope: Deactivated successfully.
Nov 13 09:27:36.509074 systemd-logind[1628]: Removed session 28.
Nov 13 09:27:36.641692 systemd[1]: Started sshd@26-10.230.76.174:22-139.178.68.195:57424.service - OpenSSH per-connection server daemon (139.178.68.195:57424).
Nov 13 09:27:36.722787 containerd[1653]: time="2024-11-13T09:27:36.722701023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvlnj,Uid:6290bcaa-3625-431e-9ddf-7be25815c792,Namespace:kube-system,Attempt:0,}"
Nov 13 09:27:36.761752 containerd[1653]: time="2024-11-13T09:27:36.761560234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 09:27:36.761752 containerd[1653]: time="2024-11-13T09:27:36.761680385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 09:27:36.761752 containerd[1653]: time="2024-11-13T09:27:36.761703379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 09:27:36.762679 containerd[1653]: time="2024-11-13T09:27:36.762475642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 09:27:36.825148 containerd[1653]: time="2024-11-13T09:27:36.825069214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvlnj,Uid:6290bcaa-3625-431e-9ddf-7be25815c792,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\""
Nov 13 09:27:36.849424 containerd[1653]: time="2024-11-13T09:27:36.849295578Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 13 09:27:36.871305 containerd[1653]: time="2024-11-13T09:27:36.871243438Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"703a4bedcde137fff179f36bf06f4b8738014cf7a91d87db4d1aa325b353df32\""
Nov 13 09:27:36.873641 containerd[1653]: time="2024-11-13T09:27:36.873610096Z" level=info msg="StartContainer for \"703a4bedcde137fff179f36bf06f4b8738014cf7a91d87db4d1aa325b353df32\""
Nov 13 09:27:36.952608 containerd[1653]: time="2024-11-13T09:27:36.952084618Z" level=info msg="StartContainer for \"703a4bedcde137fff179f36bf06f4b8738014cf7a91d87db4d1aa325b353df32\" returns successfully"
Nov 13 09:27:37.013608 containerd[1653]: time="2024-11-13T09:27:37.013249975Z" level=info msg="shim disconnected" id=703a4bedcde137fff179f36bf06f4b8738014cf7a91d87db4d1aa325b353df32 namespace=k8s.io
Nov 13 09:27:37.013608 containerd[1653]: time="2024-11-13T09:27:37.013341560Z" level=warning msg="cleaning up after shim disconnected" id=703a4bedcde137fff179f36bf06f4b8738014cf7a91d87db4d1aa325b353df32 namespace=k8s.io
Nov 13 09:27:37.013608 containerd[1653]: time="2024-11-13T09:27:37.013356880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 09:27:37.153864 containerd[1653]: time="2024-11-13T09:27:37.153790734Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 13 09:27:37.174334 containerd[1653]: time="2024-11-13T09:27:37.174269887Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"211a35c457118da93ec8e71905df45525868aeb75afeba4afc032158c64151dc\""
Nov 13 09:27:37.176455 containerd[1653]: time="2024-11-13T09:27:37.175691910Z" level=info msg="StartContainer for \"211a35c457118da93ec8e71905df45525868aeb75afeba4afc032158c64151dc\""
Nov 13 09:27:37.258684 containerd[1653]: time="2024-11-13T09:27:37.258414624Z" level=info msg="StartContainer for \"211a35c457118da93ec8e71905df45525868aeb75afeba4afc032158c64151dc\" returns successfully"
Nov 13 09:27:37.298725 containerd[1653]: time="2024-11-13T09:27:37.298594944Z" level=info msg="shim disconnected" id=211a35c457118da93ec8e71905df45525868aeb75afeba4afc032158c64151dc namespace=k8s.io
Nov 13 09:27:37.299320 containerd[1653]: time="2024-11-13T09:27:37.298702854Z" level=warning msg="cleaning up after shim disconnected" id=211a35c457118da93ec8e71905df45525868aeb75afeba4afc032158c64151dc namespace=k8s.io
Nov 13 09:27:37.299320 containerd[1653]: time="2024-11-13T09:27:37.299051882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 09:27:37.543455 sshd[4792]: Accepted publickey for core from 139.178.68.195 port 57424 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao
Nov 13 09:27:37.545356 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 09:27:37.553614 systemd-logind[1628]: New session 29 of user core.
Nov 13 09:27:37.560315 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 13 09:27:37.725764 kubelet[3004]: E1113 09:27:37.725677 3004 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 13 09:27:38.157243 sshd[4963]: Connection closed by 139.178.68.195 port 57424
Nov 13 09:27:38.160068 sshd-session[4792]: pam_unix(sshd:session): session closed for user core
Nov 13 09:27:38.162203 containerd[1653]: time="2024-11-13T09:27:38.162080428Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 13 09:27:38.167994 systemd[1]: sshd@26-10.230.76.174:22-139.178.68.195:57424.service: Deactivated successfully.
Nov 13 09:27:38.178897 systemd[1]: session-29.scope: Deactivated successfully.
Nov 13 09:27:38.181434 systemd-logind[1628]: Session 29 logged out. Waiting for processes to exit.
Nov 13 09:27:38.185135 systemd-logind[1628]: Removed session 29.
Nov 13 09:27:38.202646 containerd[1653]: time="2024-11-13T09:27:38.200067000Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a\""
Nov 13 09:27:38.203263 containerd[1653]: time="2024-11-13T09:27:38.203229102Z" level=info msg="StartContainer for \"46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a\""
Nov 13 09:27:38.297463 containerd[1653]: time="2024-11-13T09:27:38.297391889Z" level=info msg="StartContainer for \"46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a\" returns successfully"
Nov 13 09:27:38.308394 systemd[1]: Started sshd@27-10.230.76.174:22-139.178.68.195:57428.service - OpenSSH per-connection server daemon (139.178.68.195:57428).
Nov 13 09:27:38.346758 containerd[1653]: time="2024-11-13T09:27:38.346667107Z" level=info msg="shim disconnected" id=46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a namespace=k8s.io
Nov 13 09:27:38.347104 containerd[1653]: time="2024-11-13T09:27:38.347077484Z" level=warning msg="cleaning up after shim disconnected" id=46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a namespace=k8s.io
Nov 13 09:27:38.347215 containerd[1653]: time="2024-11-13T09:27:38.347178999Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 09:27:38.591997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46e2bc70ace1f8a05c533ab214579ecb8e93a3e9f917e962acd5852f7f43575a-rootfs.mount: Deactivated successfully.
Nov 13 09:27:39.163489 containerd[1653]: time="2024-11-13T09:27:39.163407663Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 13 09:27:39.194078 containerd[1653]: time="2024-11-13T09:27:39.194009391Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc\""
Nov 13 09:27:39.196807 containerd[1653]: time="2024-11-13T09:27:39.196774092Z" level=info msg="StartContainer for \"b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc\""
Nov 13 09:27:39.218865 sshd[5001]: Accepted publickey for core from 139.178.68.195 port 57428 ssh2: RSA SHA256:PEkR6TwfQ+33gzVeyWP9Jiy96hkY0vaI5PBZPRuFgao
Nov 13 09:27:39.220489 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 09:27:39.235575 systemd-logind[1628]: New session 30 of user core.
Nov 13 09:27:39.239213 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 13 09:27:39.294733 containerd[1653]: time="2024-11-13T09:27:39.294674905Z" level=info msg="StartContainer for \"b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc\" returns successfully"
Nov 13 09:27:39.322875 containerd[1653]: time="2024-11-13T09:27:39.322619005Z" level=info msg="shim disconnected" id=b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc namespace=k8s.io
Nov 13 09:27:39.322875 containerd[1653]: time="2024-11-13T09:27:39.322706717Z" level=warning msg="cleaning up after shim disconnected" id=b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc namespace=k8s.io
Nov 13 09:27:39.322875 containerd[1653]: time="2024-11-13T09:27:39.322721937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 09:27:39.592023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9db17ff4fbe9c9998463b2a809990b2bf5494cdb1b069fce524e643220939dc-rootfs.mount: Deactivated successfully.
Nov 13 09:27:40.173391 containerd[1653]: time="2024-11-13T09:27:40.173310743Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 13 09:27:40.204251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193534404.mount: Deactivated successfully.
Nov 13 09:27:40.208261 containerd[1653]: time="2024-11-13T09:27:40.208204580Z" level=info msg="CreateContainer within sandbox \"5d943bc39b4cacaf32ce445cca27521684daaa46ef868079cbe58094ea36700a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"01edaab45715a510abc7b4bc61664c5753dc8a8d399850de9e7e8ea5ac56d8d3\""
Nov 13 09:27:40.210438 containerd[1653]: time="2024-11-13T09:27:40.210338542Z" level=info msg="StartContainer for \"01edaab45715a510abc7b4bc61664c5753dc8a8d399850de9e7e8ea5ac56d8d3\""
Nov 13 09:27:40.292890 containerd[1653]: time="2024-11-13T09:27:40.291688492Z" level=info msg="StartContainer for \"01edaab45715a510abc7b4bc61664c5753dc8a8d399850de9e7e8ea5ac56d8d3\" returns successfully"
Nov 13 09:27:40.593409 systemd[1]: run-containerd-runc-k8s.io-01edaab45715a510abc7b4bc61664c5753dc8a8d399850de9e7e8ea5ac56d8d3-runc.wUO8oK.mount: Deactivated successfully.
Nov 13 09:27:40.942078 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 13 09:27:41.203923 kubelet[3004]: I1113 09:27:41.201087 3004 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zvlnj" podStartSLOduration=5.200984881 podStartE2EDuration="5.200984881s" podCreationTimestamp="2024-11-13 09:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 09:27:41.195598353 +0000 UTC m=+148.966853000" watchObservedRunningTime="2024-11-13 09:27:41.200984881 +0000 UTC m=+148.972239523"
Nov 13 09:27:44.559593 systemd-networkd[1258]: lxc_health: Link UP
Nov 13 09:27:44.560147 systemd-networkd[1258]: lxc_health: Gained carrier
Nov 13 09:27:45.722886 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Nov 13 09:27:51.273104 sshd[5047]: Connection closed by 139.178.68.195 port 57428
Nov 13 09:27:51.274957 sshd-session[5001]: pam_unix(sshd:session): session closed for user core
Nov 13 09:27:51.280595 systemd-logind[1628]: Session 30 logged out. Waiting for processes to exit.
Nov 13 09:27:51.282768 systemd[1]: sshd@27-10.230.76.174:22-139.178.68.195:57428.service: Deactivated successfully.
Nov 13 09:27:51.287292 systemd[1]: session-30.scope: Deactivated successfully.
Nov 13 09:27:51.288740 systemd-logind[1628]: Removed session 30.