Jul 2 07:58:50.057851 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:58:50.057877 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:58:50.057890 kernel: BIOS-provided physical RAM map: Jul 2 07:58:50.057897 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 07:58:50.057903 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 07:58:50.057910 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 07:58:50.057918 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jul 2 07:58:50.057924 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jul 2 07:58:50.057933 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 07:58:50.057940 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 07:58:50.057947 kernel: NX (Execute Disable) protection: active Jul 2 07:58:50.057953 kernel: SMBIOS 2.8 present. Jul 2 07:58:50.057960 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jul 2 07:58:50.057967 kernel: Hypervisor detected: KVM Jul 2 07:58:50.057975 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:58:50.057985 kernel: kvm-clock: cpu 0, msr 48192001, primary cpu clock Jul 2 07:58:50.057992 kernel: kvm-clock: using sched offset of 4051641261 cycles Jul 2 07:58:50.058000 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:58:50.058011 kernel: tsc: Detected 2494.140 MHz processor Jul 2 07:58:50.058020 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:58:50.058027 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:58:50.058034 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jul 2 07:58:50.058042 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:58:50.058052 kernel: ACPI: Early table checksum verification disabled Jul 2 07:58:50.058060 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jul 2 07:58:50.058067 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058075 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058082 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058091 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 2 07:58:50.058101 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058108 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058116 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058126 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:58:50.058133 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jul 2 07:58:50.058144 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jul 
2 07:58:50.058156 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jul 2 07:58:50.058168 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jul 2 07:58:50.058180 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jul 2 07:58:50.058191 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jul 2 07:58:50.058201 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jul 2 07:58:50.058216 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:58:50.058225 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:58:50.058232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 07:58:50.058241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 2 07:58:50.058249 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jul 2 07:58:50.058269 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jul 2 07:58:50.058284 kernel: Zone ranges: Jul 2 07:58:50.058292 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:58:50.058300 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jul 2 07:58:50.058308 kernel: Normal empty Jul 2 07:58:50.065421 kernel: Movable zone start for each node Jul 2 07:58:50.065474 kernel: Early memory node ranges Jul 2 07:58:50.065487 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 07:58:50.065497 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jul 2 07:58:50.065506 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jul 2 07:58:50.065524 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:58:50.065604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 07:58:50.065616 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jul 2 07:58:50.065633 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 2 07:58:50.070407 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:58:50.070428 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:58:50.070438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:58:50.070447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:58:50.070455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:58:50.070473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:58:50.070482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:58:50.070490 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:58:50.070498 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:58:50.070517 kernel: TSC deadline timer available Jul 2 07:58:50.070526 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:58:50.070535 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 2 07:58:50.070546 kernel: Booting paravirtualized kernel on KVM Jul 2 07:58:50.070556 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:58:50.070567 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:58:50.070576 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:58:50.070587 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:58:50.070598 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:58:50.070611 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Jul 2 07:58:50.070641 kernel: kvm-guest: 
PV spinlocks disabled, no host support Jul 2 07:58:50.070652 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jul 2 07:58:50.070674 kernel: Policy zone: DMA32 Jul 2 07:58:50.070688 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:58:50.070707 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:58:50.070718 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:58:50.070730 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:58:50.070741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:58:50.070753 kernel: Memory: 1973264K/2096600K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 123076K reserved, 0K cma-reserved) Jul 2 07:58:50.070764 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:58:50.070776 kernel: Kernel/User page tables isolation: enabled Jul 2 07:58:50.070795 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:58:50.070824 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:58:50.070832 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:58:50.070842 kernel: rcu: RCU event tracing is enabled. Jul 2 07:58:50.070850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:58:50.070859 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:58:50.070868 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:58:50.070880 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:58:50.070891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:58:50.070903 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 07:58:50.070919 kernel: random: crng init done Jul 2 07:58:50.070931 kernel: Console: colour VGA+ 80x25 Jul 2 07:58:50.070942 kernel: printk: console [tty0] enabled Jul 2 07:58:50.070954 kernel: printk: console [ttyS0] enabled Jul 2 07:58:50.070968 kernel: ACPI: Core revision 20210730 Jul 2 07:58:50.070983 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:58:50.070997 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:58:50.071009 kernel: x2apic enabled Jul 2 07:58:50.071019 kernel: Switched APIC routing to physical x2apic. Jul 2 07:58:50.071042 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:58:50.071053 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 07:58:50.071065 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Jul 2 07:58:50.071077 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 2 07:58:50.071091 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 2 07:58:50.071103 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:58:50.071115 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:58:50.071129 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:58:50.071142 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:58:50.071159 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 2 07:58:50.071183 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:58:50.071194 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:58:50.071209 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 07:58:50.071222 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:58:50.071235 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:58:50.071256 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:58:50.071273 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:58:50.071283 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:58:50.071292 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:58:50.071306 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:58:50.071337 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:58:50.071352 kernel: LSM: Security Framework initializing Jul 2 07:58:50.071367 kernel: SELinux: Initializing. Jul 2 07:58:50.071382 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 07:58:50.071395 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 07:58:50.071413 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jul 2 07:58:50.071427 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jul 2 07:58:50.071442 kernel: signal: max sigframe size: 1776 Jul 2 07:58:50.071458 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:58:50.071472 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:58:50.071481 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:58:50.071491 kernel: x86: Booting SMP configuration: Jul 2 07:58:50.071505 kernel: .... 
node #0, CPUs: #1 Jul 2 07:58:50.071519 kernel: kvm-clock: cpu 1, msr 48192041, secondary cpu clock Jul 2 07:58:50.071535 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Jul 2 07:58:50.071555 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:58:50.071569 kernel: smpboot: Max logical packages: 1 Jul 2 07:58:50.071583 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jul 2 07:58:50.071598 kernel: devtmpfs: initialized Jul 2 07:58:50.071613 kernel: x86/mm: Memory block size: 128MB Jul 2 07:58:50.071628 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:58:50.071644 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:58:50.071658 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:58:50.071672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:58:50.071691 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:58:50.071706 kernel: audit: type=2000 audit(1719907128.988:1): state=initialized audit_enabled=0 res=1 Jul 2 07:58:50.071721 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:58:50.071735 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:58:50.071749 kernel: cpuidle: using governor menu Jul 2 07:58:50.071764 kernel: ACPI: bus type PCI registered Jul 2 07:58:50.071779 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:58:50.071794 kernel: dca service started, version 1.12.1 Jul 2 07:58:50.071808 kernel: PCI: Using configuration type 1 for base access Jul 2 07:58:50.071826 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 07:58:50.071842 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:58:50.071855 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:58:50.071866 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:58:50.071879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:58:50.071915 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:58:50.071931 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:58:50.071944 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:58:50.071957 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:58:50.071976 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:58:50.071989 kernel: ACPI: Interpreter enabled Jul 2 07:58:50.072002 kernel: ACPI: PM: (supports S0 S5) Jul 2 07:58:50.072016 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:58:50.072029 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:58:50.072043 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:58:50.072055 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:58:50.072593 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:58:50.072779 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 07:58:50.072797 kernel: acpiphp: Slot [3] registered Jul 2 07:58:50.072809 kernel: acpiphp: Slot [4] registered Jul 2 07:58:50.072821 kernel: acpiphp: Slot [5] registered Jul 2 07:58:50.072832 kernel: acpiphp: Slot [6] registered Jul 2 07:58:50.072844 kernel: acpiphp: Slot [7] registered Jul 2 07:58:50.072856 kernel: acpiphp: Slot [8] registered Jul 2 07:58:50.072870 kernel: acpiphp: Slot [9] registered Jul 2 07:58:50.072890 kernel: acpiphp: Slot [10] registered Jul 2 07:58:50.072901 kernel: acpiphp: Slot [11] registered Jul 2 07:58:50.072913 kernel: acpiphp: Slot [12] registered Jul 2 07:58:50.072925 kernel: acpiphp: Slot [13] registered Jul 2 07:58:50.072938 kernel: acpiphp: Slot [14] registered Jul 2 07:58:50.072952 kernel: acpiphp: Slot [15] registered Jul 2 07:58:50.072964 kernel: acpiphp: Slot [16] registered Jul 2 07:58:50.072973 kernel: acpiphp: Slot [17] registered Jul 2 07:58:50.072981 kernel: acpiphp: Slot [18] registered Jul 2 07:58:50.072990 kernel: acpiphp: Slot [19] registered Jul 2 07:58:50.073002 kernel: acpiphp: Slot [20] registered Jul 2 07:58:50.073010 kernel: acpiphp: Slot [21] registered Jul 2 07:58:50.073023 kernel: acpiphp: Slot [22] registered Jul 2 07:58:50.073035 kernel: acpiphp: Slot [23] registered Jul 2 07:58:50.073047 kernel: acpiphp: Slot [24] registered Jul 2 07:58:50.073059 kernel: acpiphp: Slot [25] registered Jul 2 07:58:50.073071 kernel: acpiphp: Slot [26] registered Jul 2 07:58:50.073079 kernel: acpiphp: Slot [27] registered Jul 2 07:58:50.073098 kernel: acpiphp: Slot [28] registered Jul 2 07:58:50.073111 kernel: acpiphp: Slot [29] registered Jul 2 07:58:50.073124 kernel: acpiphp: Slot [30] registered Jul 2 07:58:50.073136 kernel: acpiphp: Slot [31] registered Jul 2 07:58:50.073151 kernel: PCI host bridge to bus 0000:00 Jul 2 07:58:50.073329 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:58:50.073434 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:58:50.073522 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:58:50.073608 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 07:58:50.073696 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 07:58:50.073778 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:58:50.073895 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:58:50.074029 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:58:50.074179 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:58:50.077864 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jul 2 07:58:50.078091 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:58:50.078198 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:58:50.078438 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:58:50.078592 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:58:50.078723 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jul 2 07:58:50.078871 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jul 2 07:58:50.079014 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:58:50.079145 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 07:58:50.079300 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 07:58:50.079429 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 
0x030000 Jul 2 07:58:50.079529 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 2 07:58:50.079689 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jul 2 07:58:50.079796 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jul 2 07:58:50.079994 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 07:58:50.080155 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:58:50.080385 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:58:50.080551 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jul 2 07:58:50.083612 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jul 2 07:58:50.083813 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jul 2 07:58:50.084043 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:58:50.084220 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jul 2 07:58:50.088602 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jul 2 07:58:50.088750 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 2 07:58:50.088862 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jul 2 07:58:50.088964 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jul 2 07:58:50.089075 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jul 2 07:58:50.089197 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 2 07:58:50.089359 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:58:50.089561 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:58:50.089703 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jul 2 07:58:50.089857 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jul 2 07:58:50.089979 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:58:50.090087 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jul 2 07:58:50.090180 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jul 2 07:58:50.090313 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jul 2 07:58:50.090481 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jul 2 07:58:50.090632 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jul 2 07:58:50.090767 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jul 2 07:58:50.090785 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:58:50.090800 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:58:50.090814 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:58:50.090837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:58:50.090851 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:58:50.090863 kernel: iommu: Default domain type: Translated Jul 2 07:58:50.090877 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:58:50.091034 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:58:50.091179 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:58:50.091349 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 07:58:50.091368 kernel: vgaarb: loaded Jul 2 07:58:50.091388 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:58:50.091401 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jul 2 07:58:50.091413 kernel: PTP clock support registered Jul 2 07:58:50.091425 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:58:50.091438 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:58:50.091450 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 07:58:50.091465 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jul 2 07:58:50.091478 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:58:50.091490 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:58:50.091508 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:58:50.091522 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:58:50.091535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:58:50.091549 kernel: pnp: PnP ACPI init Jul 2 07:58:50.091561 kernel: pnp: PnP ACPI: found 4 devices Jul 2 07:58:50.091575 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:58:50.091588 kernel: NET: Registered PF_INET protocol family Jul 2 07:58:50.091601 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:58:50.091617 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 07:58:50.091637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:58:50.091652 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:58:50.091667 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 07:58:50.091680 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 07:58:50.091693 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 07:58:50.091705 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 07:58:50.091717 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:58:50.091731 kernel: NET: Registered PF_XDP protocol family Jul 2 07:58:50.091918 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:58:50.092057 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:58:50.092181 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:58:50.093484 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 07:58:50.093732 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 07:58:50.093897 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:58:50.094034 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:58:50.094163 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Jul 2 07:58:50.094184 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 07:58:50.094278 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 32469 usecs Jul 2 07:58:50.094290 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:58:50.094299 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:58:50.094308 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 07:58:50.094328 kernel: Initialise system trusted keyrings Jul 2 07:58:50.094337 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 07:58:50.095428 kernel: Key type asymmetric registered Jul 2 07:58:50.095447 kernel: Asymmetric key parser 'x509' registered Jul 2 07:58:50.095462 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 249) Jul 2 07:58:50.095471 kernel: io scheduler mq-deadline registered Jul 2 07:58:50.095480 kernel: io scheduler kyber registered Jul 2 07:58:50.095493 kernel: io scheduler bfq registered Jul 2 07:58:50.095506 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:58:50.095522 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 2 07:58:50.095532 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:58:50.095540 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:58:50.095549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:58:50.095558 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:58:50.095575 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:58:50.095584 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:58:50.095597 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:58:50.095790 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 2 07:58:50.095807 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:58:50.095926 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 07:58:50.096049 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T07:58:49 UTC (1719907129) Jul 2 07:58:50.096142 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 07:58:50.096157 kernel: intel_pstate: CPU model not supported Jul 2 07:58:50.096170 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:58:50.096180 kernel: Segment Routing with IPv6 Jul 2 07:58:50.096197 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:58:50.096206 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:58:50.096215 kernel: Key type dns_resolver registered Jul 2 07:58:50.096224 kernel: IPI shorthand broadcast: enabled Jul 2 07:58:50.096234 kernel: sched_clock: Marking stable (677452810, 107832639)->(890065113, -104779664) Jul 2 07:58:50.096246 kernel: registered taskstats version 1 Jul 2 07:58:50.096255 kernel: Loading compiled-in X.509 certificates Jul 2 07:58:50.096263 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:58:50.096272 kernel: Key type .fscrypt registered Jul 2 07:58:50.096286 kernel: Key type fscrypt-provisioning registered Jul 2 07:58:50.096295 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:58:50.098390 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:58:50.098405 kernel: ima: No architecture policies found Jul 2 07:58:50.098414 kernel: clk: Disabling unused clocks Jul 2 07:58:50.098428 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:58:50.098438 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:58:50.098447 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:58:50.098456 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:58:50.098464 kernel: Run /init as init process Jul 2 07:58:50.098473 kernel: with arguments: Jul 2 07:58:50.098502 kernel: /init Jul 2 07:58:50.098513 kernel: with environment: Jul 2 07:58:50.098522 kernel: HOME=/ Jul 2 07:58:50.098540 kernel: TERM=linux Jul 2 07:58:50.098554 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:58:50.098575 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:58:50.098595 systemd[1]: Detected virtualization kvm. Jul 2 07:58:50.098609 systemd[1]: Detected architecture x86-64. Jul 2 07:58:50.098618 systemd[1]: Running in initrd. Jul 2 07:58:50.098627 systemd[1]: No hostname configured, using default hostname. Jul 2 07:58:50.098638 systemd[1]: Hostname set to <localhost>. Jul 2 07:58:50.098655 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:58:50.098664 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:58:50.098674 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:58:50.098683 systemd[1]: Reached target cryptsetup.target. Jul 2 07:58:50.098692 systemd[1]: Reached target paths.target. Jul 2 07:58:50.098701 systemd[1]: Reached target slices.target. Jul 2 07:58:50.098710 systemd[1]: Reached target swap.target. Jul 2 07:58:50.098720 systemd[1]: Reached target timers.target. Jul 2 07:58:50.098735 systemd[1]: Listening on iscsid.socket. Jul 2 07:58:50.098746 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:58:50.098759 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:58:50.098769 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:58:50.098778 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:58:50.098788 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:58:50.098797 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:58:50.098809 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:58:50.098818 systemd[1]: Reached target sockets.target. Jul 2 07:58:50.098828 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:58:50.098848 systemd[1]: Finished network-cleanup.service. Jul 2 07:58:50.098857 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:58:50.098867 systemd[1]: Starting systemd-journald.service... Jul 2 07:58:50.098882 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:58:50.098902 systemd[1]: Starting systemd-resolved.service... Jul 2 07:58:50.098915 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:58:50.098929 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:58:50.098939 kernel: audit: type=1130 audit(1719907130.065:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 07:58:50.098949 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:58:50.098967 systemd-journald[184]: Journal started Jul 2 07:58:50.099060 systemd-journald[184]: Runtime Journal (/run/log/journal/4834573019b14fdda3702d821f62701a) is 4.9M, max 39.5M, 34.5M free. Jul 2 07:58:50.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.089557 systemd-modules-load[185]: Inserted module 'overlay' Jul 2 07:58:50.136934 systemd[1]: Started systemd-journald.service. Jul 2 07:58:50.136965 kernel: audit: type=1130 audit(1719907130.130:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.104098 systemd-resolved[186]: Positive Trust Anchors: Jul 2 07:58:50.141213 kernel: audit: type=1130 audit(1719907130.136:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.104111 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:58:50.157931 kernel: audit: type=1130 audit(1719907130.140:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.157962 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:58:50.157975 kernel: audit: type=1130 audit(1719907130.148:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.157995 kernel: Bridge firewalling registered Jul 2 07:58:50.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.104146 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:58:50.107691 systemd-resolved[186]: Defaulting to hostname 'linux'. 
Jul 2 07:58:50.136949 systemd[1]: Started systemd-resolved.service. Jul 2 07:58:50.141123 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:58:50.149039 systemd[1]: Reached target nss-lookup.target. Jul 2 07:58:50.154367 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:58:50.155964 systemd-modules-load[185]: Inserted module 'br_netfilter' Jul 2 07:58:50.157304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:58:50.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.170112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:58:50.174615 kernel: audit: type=1130 audit(1719907130.170:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.192474 kernel: SCSI subsystem initialized Jul 2 07:58:50.193471 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:58:50.196004 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:58:50.206230 kernel: audit: type=1130 audit(1719907130.193:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.210366 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:58:50.210450 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:58:50.213383 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:58:50.217675 systemd-modules-load[185]: Inserted module 'dm_multipath' Jul 2 07:58:50.218632 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:58:50.240964 kernel: audit: type=1130 audit(1719907130.219:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.241079 dracut-cmdline[202]: dracut-dracut-053 Jul 2 07:58:50.241079 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:58:50.225096 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:58:50.247788 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:58:50.253219 kernel: audit: type=1130 audit(1719907130.248:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:58:50.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.346356 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:58:50.372421 kernel: iscsi: registered transport (tcp) Jul 2 07:58:50.402361 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:58:50.402437 kernel: QLogic iSCSI HBA Driver Jul 2 07:58:50.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.468145 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:58:50.471638 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:58:50.546438 kernel: raid6: avx2x4 gen() 13466 MB/s Jul 2 07:58:50.563394 kernel: raid6: avx2x4 xor() 6132 MB/s Jul 2 07:58:50.580431 kernel: raid6: avx2x2 gen() 14176 MB/s Jul 2 07:58:50.597401 kernel: raid6: avx2x2 xor() 14854 MB/s Jul 2 07:58:50.614423 kernel: raid6: avx2x1 gen() 10444 MB/s Jul 2 07:58:50.631430 kernel: raid6: avx2x1 xor() 13943 MB/s Jul 2 07:58:50.648452 kernel: raid6: sse2x4 gen() 9955 MB/s Jul 2 07:58:50.665426 kernel: raid6: sse2x4 xor() 6073 MB/s Jul 2 07:58:50.682387 kernel: raid6: sse2x2 gen() 9714 MB/s Jul 2 07:58:50.700513 kernel: raid6: sse2x2 xor() 6840 MB/s Jul 2 07:58:50.717401 kernel: raid6: sse2x1 gen() 7690 MB/s Jul 2 07:58:50.735305 kernel: raid6: sse2x1 xor() 4809 MB/s Jul 2 07:58:50.735421 kernel: raid6: using algorithm avx2x2 gen() 14176 MB/s Jul 2 07:58:50.735436 kernel: raid6: .... xor() 14854 MB/s, rmw enabled Jul 2 07:58:50.736263 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:58:50.755377 kernel: xor: automatically using best checksumming function avx Jul 2 07:58:50.890376 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:58:50.906881 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:58:50.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.907000 audit: BPF prog-id=7 op=LOAD Jul 2 07:58:50.907000 audit: BPF prog-id=8 op=LOAD Jul 2 07:58:50.908792 systemd[1]: Starting systemd-udevd.service... Jul 2 07:58:50.929778 systemd-udevd[384]: Using default interface naming scheme 'v252'. Jul 2 07:58:50.937792 systemd[1]: Started systemd-udevd.service. Jul 2 07:58:50.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:50.944001 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:58:50.969467 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Jul 2 07:58:51.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:51.021688 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:58:51.024134 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:58:51.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:58:51.108058 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:58:51.203355 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:58:51.203433 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 2 07:58:51.211349 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:58:51.213657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:58:51.213732 kernel: GPT:9289727 != 125829119 Jul 2 07:58:51.213745 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:58:51.214608 kernel: GPT:9289727 != 125829119 Jul 2 07:58:51.215976 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:58:51.216026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:58:51.228837 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jul 2 07:58:51.233930 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:58:51.234029 kernel: AES CTR mode by8 optimization enabled Jul 2 07:58:51.286247 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:58:51.366254 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431) Jul 2 07:58:51.370283 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:58:51.374356 kernel: ACPI: bus type USB registered Jul 2 07:58:51.374437 kernel: libata version 3.00 loaded. Jul 2 07:58:51.372914 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:58:51.378387 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:58:51.380461 kernel: usbcore: registered new interface driver usbfs Jul 2 07:58:51.383567 kernel: usbcore: registered new interface driver hub Jul 2 07:58:51.383691 kernel: usbcore: registered new device driver usb Jul 2 07:58:51.383713 kernel: scsi host1: ata_piix Jul 2 07:58:51.390368 kernel: scsi host2: ata_piix Jul 2 07:58:51.390578 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jul 2 07:58:51.393150 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jul 2 07:58:51.399796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:58:51.402291 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Jul 2 07:58:51.409130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:58:51.419124 systemd[1]: Starting disk-uuid.service... Jul 2 07:58:51.429845 disk-uuid[500]: Primary Header is updated. Jul 2 07:58:51.429845 disk-uuid[500]: Secondary Entries is updated. Jul 2 07:58:51.429845 disk-uuid[500]: Secondary Header is updated. Jul 2 07:58:51.441367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:58:51.455371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:58:51.556366 kernel: ehci-pci: EHCI PCI platform driver Jul 2 07:58:51.581361 kernel: uhci_hcd: USB Universal Host Controller Interface driver Jul 2 07:58:51.604127 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jul 2 07:58:51.604391 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jul 2 07:58:51.604562 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jul 2 07:58:51.607417 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Jul 2 07:58:51.609741 kernel: hub 1-0:1.0: USB hub found Jul 2 07:58:51.610092 kernel: hub 1-0:1.0: 2 ports detected Jul 2 07:58:52.461100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:58:52.461257 disk-uuid[503]: The operation has completed successfully. 
Jul 2 07:58:52.532559 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:58:52.533604 systemd[1]: Finished disk-uuid.service. Jul 2 07:58:52.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:52.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:52.535745 systemd[1]: Starting verity-setup.service... Jul 2 07:58:52.560504 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:58:52.648732 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:58:52.651470 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:58:52.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:52.653954 systemd[1]: Finished verity-setup.service. Jul 2 07:58:52.769384 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:58:52.770742 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:58:52.771526 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:58:52.773048 systemd[1]: Starting ignition-setup.service... Jul 2 07:58:52.774827 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:58:52.799366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:58:52.799510 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:58:52.799531 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:58:52.821266 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:58:52.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:52.833274 systemd[1]: Finished ignition-setup.service. Jul 2 07:58:52.836195 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:58:52.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:52.974082 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:58:52.975000 audit: BPF prog-id=9 op=LOAD Jul 2 07:58:52.977183 systemd[1]: Starting systemd-networkd.service... Jul 2 07:58:53.017946 systemd-networkd[687]: lo: Link UP Jul 2 07:58:53.017986 systemd-networkd[687]: lo: Gained carrier Jul 2 07:58:53.019739 systemd-networkd[687]: Enumeration completed Jul 2 07:58:53.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.020030 systemd[1]: Started systemd-networkd.service. Jul 2 07:58:53.020778 systemd-networkd[687]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:58:53.021491 systemd[1]: Reached target network.target. Jul 2 07:58:53.022566 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jul 2 07:58:53.024662 systemd-networkd[687]: eth1: Link UP Jul 2 07:58:53.024668 systemd-networkd[687]: eth1: Gained carrier Jul 2 07:58:53.029598 systemd[1]: Starting iscsiuio.service... Jul 2 07:58:53.044853 systemd-networkd[687]: eth0: Link UP Jul 2 07:58:53.046073 systemd-networkd[687]: eth0: Gained carrier Jul 2 07:58:53.058821 systemd-networkd[687]: eth0: DHCPv4 address 146.190.152.6/20, gateway 146.190.144.1 acquired from 169.254.169.253 Jul 2 07:58:53.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.062831 systemd[1]: Started iscsiuio.service. Jul 2 07:58:53.065271 systemd[1]: Starting iscsid.service... Jul 2 07:58:53.067849 systemd-networkd[687]: eth1: DHCPv4 address 10.124.0.9/20 acquired from 169.254.169.253 Jul 2 07:58:53.073493 ignition[611]: Ignition 2.14.0 Jul 2 07:58:53.073511 ignition[611]: Stage: fetch-offline Jul 2 07:58:53.077316 iscsid[692]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:58:53.077316 iscsid[692]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:58:53.077316 iscsid[692]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:58:53.077316 iscsid[692]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:58:53.077316 iscsid[692]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:58:53.077316 iscsid[692]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:58:53.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.073718 ignition[611]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.081596 systemd[1]: Started iscsid.service. Jul 2 07:58:53.073788 ignition[611]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.096685 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:58:53.087729 ignition[611]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.098905 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:58:53.087923 ignition[611]: parsed url from cmdline: "" Jul 2 07:58:53.102074 systemd[1]: Starting ignition-fetch.service...
Jul 2 07:58:53.087930 ignition[611]: no config URL provided Jul 2 07:58:53.087940 ignition[611]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:58:53.087957 ignition[611]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:58:53.087969 ignition[611]: failed to fetch config: resource requires networking Jul 2 07:58:53.088677 ignition[611]: Ignition finished successfully Jul 2 07:58:53.122266 ignition[694]: Ignition 2.14.0 Jul 2 07:58:53.122282 ignition[694]: Stage: fetch Jul 2 07:58:53.122870 ignition[694]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.122899 ignition[694]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.126482 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.126779 ignition[694]: parsed url from cmdline: "" Jul 2 07:58:53.127880 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:58:53.126785 ignition[694]: no config URL provided Jul 2 07:58:53.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.126795 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:58:53.126809 ignition[694]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:58:53.126851 ignition[694]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jul 2 07:58:53.132647 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:58:53.133635 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:58:53.134894 systemd[1]: Reached target remote-fs.target. Jul 2 07:58:53.139701 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:58:53.159488 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:58:53.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.174373 ignition[694]: GET result: OK Jul 2 07:58:53.174556 ignition[694]: parsing config with SHA512: a02dd1d1f01a034e86f20f6becc7c9262cff44f1313166f3eb55a689917ec72e551c37513da9984f0db8d1ab3941f413da7f0b4ae3f0647a63ab44ff6a0fd548 Jul 2 07:58:53.184202 unknown[694]: fetched base config from "system" Jul 2 07:58:53.185073 unknown[694]: fetched base config from "system" Jul 2 07:58:53.185816 unknown[694]: fetched user config from "digitalocean" Jul 2 07:58:53.187201 ignition[694]: fetch: fetch complete Jul 2 07:58:53.187903 ignition[694]: fetch: fetch passed Jul 2 07:58:53.188651 ignition[694]: Ignition finished successfully Jul 2 07:58:53.191391 systemd[1]: Finished ignition-fetch.service. Jul 2 07:58:53.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.194159 systemd[1]: Starting ignition-kargs.service... 
Jul 2 07:58:53.222652 ignition[712]: Ignition 2.14.0 Jul 2 07:58:53.224079 ignition[712]: Stage: kargs Jul 2 07:58:53.225231 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.226202 ignition[712]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.228815 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.231818 ignition[712]: kargs: kargs passed Jul 2 07:58:53.232690 ignition[712]: Ignition finished successfully Jul 2 07:58:53.234785 systemd[1]: Finished ignition-kargs.service. Jul 2 07:58:53.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.237509 systemd[1]: Starting ignition-disks.service... Jul 2 07:58:53.257350 ignition[717]: Ignition 2.14.0 Jul 2 07:58:53.257366 ignition[717]: Stage: disks Jul 2 07:58:53.257574 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.257601 ignition[717]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.261420 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.263961 ignition[717]: disks: disks passed Jul 2 07:58:53.264159 ignition[717]: Ignition finished successfully Jul 2 07:58:53.266812 systemd[1]: Finished ignition-disks.service. Jul 2 07:58:53.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.267516 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:58:53.268498 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:58:53.269454 systemd[1]: Reached target local-fs.target. Jul 2 07:58:53.270196 systemd[1]: Reached target sysinit.target. Jul 2 07:58:53.270635 systemd[1]: Reached target basic.target. Jul 2 07:58:53.273356 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:58:53.302053 systemd-fsck[724]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:58:53.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.305889 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:58:53.307681 systemd[1]: Mounting sysroot.mount... Jul 2 07:58:53.325352 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:58:53.327315 systemd[1]: Mounted sysroot.mount. Jul 2 07:58:53.328986 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:58:53.332214 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:58:53.335611 systemd[1]: Starting flatcar-digitalocean-network.service... Jul 2 07:58:53.340388 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 07:58:53.343787 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:58:53.345937 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:58:53.351174 systemd[1]: Mounted sysroot-usr.mount. 
Jul 2 07:58:53.356502 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:58:53.368722 initrd-setup-root[736]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:58:53.387805 initrd-setup-root[744]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:58:53.399928 initrd-setup-root[752]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:58:53.414678 initrd-setup-root[762]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:58:53.518349 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:58:53.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.520554 systemd[1]: Starting ignition-mount.service... Jul 2 07:58:53.522737 systemd[1]: Starting sysroot-boot.service... Jul 2 07:58:53.534752 coreos-metadata[730]: Jul 02 07:58:53.534 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 07:58:53.547493 bash[781]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 07:58:53.558201 coreos-metadata[730]: Jul 02 07:58:53.558 INFO Fetch successful Jul 2 07:58:53.573789 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jul 2 07:58:53.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.573983 systemd[1]: Finished flatcar-digitalocean-network.service. Jul 2 07:58:53.582225 ignition[783]: INFO : Ignition 2.14.0 Jul 2 07:58:53.583436 ignition[783]: INFO : Stage: mount Jul 2 07:58:53.584598 ignition[783]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.585621 ignition[783]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.589470 coreos-metadata[731]: Jul 02 07:58:53.589 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 07:58:53.593147 ignition[783]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.596245 ignition[783]: INFO : mount: mount passed Jul 2 07:58:53.597065 ignition[783]: INFO : Ignition finished successfully Jul 2 07:58:53.599707 systemd[1]: Finished ignition-mount.service. Jul 2 07:58:53.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.604613 systemd[1]: Finished sysroot-boot.service. Jul 2 07:58:53.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.606747 coreos-metadata[731]: Jul 02 07:58:53.605 INFO Fetch successful Jul 2 07:58:53.611097 coreos-metadata[731]: Jul 02 07:58:53.610 INFO wrote hostname ci-3510.3.5-2-fce33301fd to /sysroot/etc/hostname Jul 2 07:58:53.612532 systemd[1]: Finished flatcar-metadata-hostname.service. 
Jul 2 07:58:53.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:53.681075 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:58:53.691374 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (790) Jul 2 07:58:53.694719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:58:53.694839 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:58:53.694854 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:58:53.704477 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:58:53.718935 systemd[1]: Starting ignition-files.service... Jul 2 07:58:53.754075 ignition[810]: INFO : Ignition 2.14.0 Jul 2 07:58:53.754075 ignition[810]: INFO : Stage: files Jul 2 07:58:53.756066 ignition[810]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:53.756066 ignition[810]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:53.758293 ignition[810]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:53.763167 ignition[810]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:58:53.765741 ignition[810]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:58:53.765741 ignition[810]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:58:53.769190 ignition[810]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:58:53.770396 ignition[810]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:58:53.772558 unknown[810]: wrote ssh authorized keys file for user: core Jul 2 07:58:53.773675 ignition[810]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:58:53.773675 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:58:53.773675 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:58:53.806444 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:58:53.868883 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:58:53.871062 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:58:53.871062 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:58:54.311468 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 07:58:54.425160 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:58:54.426628 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:58:54.427807 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:58:54.434489 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:58:54.429723 systemd-networkd[687]: eth1: Gained IPv6LL Jul 2 07:58:54.696988 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 07:58:55.069791 systemd-networkd[687]: eth0: Gained IPv6LL Jul 2 07:58:55.135193 ignition[810]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:58:55.135193 ignition[810]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:58:55.135193 ignition[810]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:58:55.135193 ignition[810]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(f): [finished] setting preset to enabled for 
"prepare-helm.service" Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:58:55.140070 ignition[810]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:58:55.146633 ignition[810]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:58:55.146633 ignition[810]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:58:55.146633 ignition[810]: INFO : files: files passed Jul 2 07:58:55.146633 ignition[810]: INFO : Ignition finished successfully Jul 2 07:58:55.157699 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 2 07:58:55.157740 kernel: audit: type=1130 audit(1719907135.148:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.148368 systemd[1]: Finished ignition-files.service. Jul 2 07:58:55.150431 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:58:55.156736 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:58:55.158206 systemd[1]: Starting ignition-quench.service... Jul 2 07:58:55.164815 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:58:55.165968 systemd[1]: Finished ignition-quench.service. Jul 2 07:58:55.173386 kernel: audit: type=1130 audit(1719907135.166:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.173424 kernel: audit: type=1131 audit(1719907135.166:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.173521 initrd-setup-root-after-ignition[835]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:58:55.178282 kernel: audit: type=1130 audit(1719907135.173:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.168222 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:58:55.174084 systemd[1]: Reached target ignition-complete.target. 
Jul 2 07:58:55.180475 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:58:55.204675 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:58:55.204798 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:58:55.214635 kernel: audit: type=1130 audit(1719907135.205:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.214672 kernel: audit: type=1131 audit(1719907135.205:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.206138 systemd[1]: Reached target initrd-fs.target. Jul 2 07:58:55.215111 systemd[1]: Reached target initrd.target. Jul 2 07:58:55.216059 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:58:55.217679 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:58:55.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.236975 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:58:55.242454 kernel: audit: type=1130 audit(1719907135.235:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.243387 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:58:55.260197 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:58:55.261887 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:58:55.263469 systemd[1]: Stopped target timers.target. Jul 2 07:58:55.264919 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:58:55.265956 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:58:55.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.281684 systemd[1]: Stopped target initrd.target. Jul 2 07:58:55.285577 kernel: audit: type=1131 audit(1719907135.279:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.284522 systemd[1]: Stopped target basic.target. Jul 2 07:58:55.284987 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:58:55.286217 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:58:55.287391 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:58:55.288384 systemd[1]: Stopped target remote-fs.target. Jul 2 07:58:55.289523 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:58:55.290530 systemd[1]: Stopped target sysinit.target. Jul 2 07:58:55.291753 systemd[1]: Stopped target local-fs.target. 
Jul 2 07:58:55.292706 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:58:55.293552 systemd[1]: Stopped target swap.target. Jul 2 07:58:55.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.294574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:58:55.300722 kernel: audit: type=1131 audit(1719907135.295:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.294749 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:58:55.296212 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:58:55.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.300546 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:58:55.306759 kernel: audit: type=1131 audit(1719907135.301:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.300822 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:58:55.302185 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:58:55.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.302431 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:58:55.307624 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:58:55.307826 systemd[1]: Stopped ignition-files.service. Jul 2 07:58:55.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.309257 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 07:58:55.309500 systemd[1]: Stopped flatcar-metadata-hostname.service. Jul 2 07:58:55.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:58:55.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.312603 systemd[1]: Stopping ignition-mount.service... Jul 2 07:58:55.329828 iscsid[692]: iscsid shutting down. Jul 2 07:58:55.313530 systemd[1]: Stopping iscsid.service... Jul 2 07:58:55.315618 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:58:55.316251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:58:55.316560 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:58:55.317453 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:58:55.317635 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:58:55.323297 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:58:55.323470 systemd[1]: Stopped iscsid.service. Jul 2 07:58:55.324568 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:58:55.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.324682 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:58:55.330475 systemd[1]: Stopping iscsiuio.service... Jul 2 07:58:55.338697 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:58:55.338823 systemd[1]: Stopped iscsiuio.service. Jul 2 07:58:55.347512 ignition[848]: INFO : Ignition 2.14.0 Jul 2 07:58:55.347512 ignition[848]: INFO : Stage: umount Jul 2 07:58:55.347512 ignition[848]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:58:55.347512 ignition[848]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Jul 2 07:58:55.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.355291 ignition[848]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 07:58:55.355291 ignition[848]: INFO : umount: umount passed Jul 2 07:58:55.355291 ignition[848]: INFO : Ignition finished successfully Jul 2 07:58:55.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.347641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:58:55.352858 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:58:55.352987 systemd[1]: Stopped ignition-mount.service. 
Jul 2 07:58:55.354447 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:58:55.354554 systemd[1]: Stopped ignition-disks.service. Jul 2 07:58:55.355688 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:58:55.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.355752 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:58:55.356823 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:58:55.356895 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:58:55.358754 systemd[1]: Stopped target network.target. Jul 2 07:58:55.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.359112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:58:55.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.359185 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:58:55.359624 systemd[1]: Stopped target paths.target. Jul 2 07:58:55.359992 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:58:55.363624 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:58:55.364447 systemd[1]: Stopped target slices.target. Jul 2 07:58:55.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.364788 systemd[1]: Stopped target sockets.target. Jul 2 07:58:55.365154 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:58:55.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.365213 systemd[1]: Closed iscsid.socket. Jul 2 07:58:55.381000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:58:55.365614 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:58:55.365659 systemd[1]: Closed iscsiuio.socket. Jul 2 07:58:55.366015 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:58:55.366066 systemd[1]: Stopped ignition-setup.service. Jul 2 07:58:55.366823 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:58:55.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.367735 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:58:55.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.368959 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:58:55.369114 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:58:55.370246 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:58:55.370389 systemd[1]: Stopped initrd-setup-root.service. 
Jul 2 07:58:55.370406 systemd-networkd[687]: eth1: DHCPv6 lease lost Jul 2 07:58:55.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.374846 systemd-networkd[687]: eth0: DHCPv6 lease lost Jul 2 07:58:55.394000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:58:55.376711 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:58:55.376858 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:58:55.379647 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:58:55.379845 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:58:55.381462 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:58:55.381508 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:58:55.383556 systemd[1]: Stopping network-cleanup.service... Jul 2 07:58:55.384221 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:58:55.384381 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:58:55.385208 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:58:55.385277 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:58:55.388356 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:58:55.388455 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:58:55.394089 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:58:55.396596 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:58:55.402685 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:58:55.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.402936 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:58:55.404655 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:58:55.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.404808 systemd[1]: Stopped network-cleanup.service. Jul 2 07:58:55.405887 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:58:55.405951 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:58:55.406981 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:58:55.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.407027 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:58:55.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.407789 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:58:55.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.407858 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:58:55.408605 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 2 07:58:55.408651 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:58:55.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.409621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:58:55.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.409681 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:58:55.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.420734 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:58:55.430350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:58:55.430478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:58:55.431517 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:58:55.431580 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:58:55.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:55.432161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:58:55.432230 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:58:55.434179 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:58:55.435114 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:58:55.435223 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:58:55.436405 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:58:55.438282 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:58:55.455219 systemd[1]: Switching root. Jul 2 07:58:55.477143 systemd-journald[184]: Journal stopped Jul 2 07:58:59.209724 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jul 2 07:58:59.209824 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:58:59.209841 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:58:59.209854 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:58:59.209866 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:58:59.209877 kernel: SELinux: policy capability open_perms=1 Jul 2 07:58:59.209903 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:58:59.209916 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:58:59.209927 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:58:59.209944 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:58:59.209955 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:58:59.209967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:58:59.211518 systemd[1]: Successfully loaded SELinux policy in 50.910ms. 
Jul 2 07:58:59.211563 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.574ms. Jul 2 07:58:59.211593 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:58:59.211606 systemd[1]: Detected virtualization kvm. Jul 2 07:58:59.211619 systemd[1]: Detected architecture x86-64. Jul 2 07:58:59.211636 systemd[1]: Detected first boot. Jul 2 07:58:59.211653 systemd[1]: Hostname set to <ci-3510.3.5-2-fce33301fd>. Jul 2 07:58:59.211671 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:58:59.211691 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:58:59.211711 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:58:59.211740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:58:59.211759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:58:59.211782 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:58:59.211799 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:58:59.211817 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:58:59.211829 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:58:59.211848 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:58:59.211860 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:58:59.211880 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 07:58:59.211892 systemd[1]: Created slice system-getty.slice. Jul 2 07:58:59.211904 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:58:59.211916 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:58:59.211928 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:58:59.211941 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:58:59.211954 systemd[1]: Created slice user.slice. Jul 2 07:58:59.211968 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:58:59.211985 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:58:59.212013 systemd[1]: Set up automount boot.automount. Jul 2 07:58:59.212032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:58:59.212045 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:58:59.212060 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:58:59.212073 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:58:59.212090 systemd[1]: Reached target integritysetup.target. Jul 2 07:58:59.212110 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:58:59.212123 systemd[1]: Reached target remote-fs.target. Jul 2 07:58:59.212135 systemd[1]: Reached target slices.target. Jul 2 07:58:59.212147 systemd[1]: Reached target swap.target. Jul 2 07:58:59.212159 systemd[1]: Reached target torcx.target. Jul 2 07:58:59.212171 systemd[1]: Reached target veritysetup.target.
Jul 2 07:58:59.212184 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:58:59.212195 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:58:59.212208 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:58:59.212220 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:58:59.212238 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:58:59.212251 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:58:59.212264 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:58:59.212276 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:58:59.214769 systemd[1]: Mounting media.mount... Jul 2 07:58:59.214804 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:58:59.214824 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:58:59.214898 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:58:59.214912 systemd[1]: Mounting tmp.mount... Jul 2 07:58:59.214938 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:58:59.214951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:58:59.214964 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:58:59.214976 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:58:59.214989 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:58:59.215001 systemd[1]: Starting modprobe@drm.service... Jul 2 07:58:59.215013 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:58:59.215024 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:58:59.215036 systemd[1]: Starting modprobe@loop.service... Jul 2 07:58:59.215055 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:58:59.215068 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:58:59.215081 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:58:59.215093 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:58:59.215106 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:58:59.215118 systemd[1]: Stopped systemd-journald.service. Jul 2 07:58:59.216664 systemd[1]: Starting systemd-journald.service... Jul 2 07:58:59.216697 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:58:59.217162 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:58:59.217209 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:58:59.217224 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:58:59.217242 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:58:59.217261 systemd[1]: Stopped verity-setup.service. Jul 2 07:58:59.217280 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:58:59.217299 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:58:59.218126 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:58:59.218161 systemd[1]: Mounted media.mount. Jul 2 07:58:59.218178 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:58:59.218202 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:58:59.218215 systemd[1]: Mounted tmp.mount. Jul 2 07:58:59.218236 kernel: loop: module loaded Jul 2 07:58:59.218250 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:58:59.218263 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:58:59.218275 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:58:59.218288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 07:58:59.218301 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:58:59.218315 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:58:59.219422 systemd[1]: Finished modprobe@drm.service. Jul 2 07:58:59.219445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:58:59.219523 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:58:59.219581 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:58:59.219598 systemd[1]: Finished modprobe@loop.service. Jul 2 07:58:59.219633 systemd-journald[950]: Journal started Jul 2 07:58:59.219706 systemd-journald[950]: Runtime Journal (/run/log/journal/4834573019b14fdda3702d821f62701a) is 4.9M, max 39.5M, 34.5M free. Jul 2 07:58:55.640000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:58:55.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:58:55.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:58:55.713000 audit: BPF prog-id=10 op=LOAD Jul 2 07:58:55.713000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:58:55.713000 audit: BPF prog-id=11 op=LOAD Jul 2 07:58:55.713000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:58:55.823000 audit[880]: AVC avc: denied { associate } for pid=880 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:58:55.823000 audit[880]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=863 pid=880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:58:55.823000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:58:55.825000 audit[880]: AVC avc: denied { associate } for pid=880 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:58:55.825000 audit[880]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859a9 a2=1ed a3=0 items=2 ppid=863 pid=880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:58:59.222760 systemd[1]: Started systemd-journald.service. 
Jul 2 07:58:55.825000 audit: CWD cwd="/" Jul 2 07:58:55.825000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:58:55.825000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:58:55.825000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:58:59.009000 audit: BPF prog-id=12 op=LOAD Jul 2 07:58:59.009000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:58:59.009000 audit: BPF prog-id=13 op=LOAD Jul 2 07:58:59.009000 audit: BPF prog-id=14 op=LOAD Jul 2 07:58:59.009000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:58:59.009000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:58:59.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.023000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:58:59.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.139000 audit: BPF prog-id=15 op=LOAD Jul 2 07:58:59.139000 audit: BPF prog-id=16 op=LOAD Jul 2 07:58:59.139000 audit: BPF prog-id=17 op=LOAD Jul 2 07:58:59.139000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:58:59.139000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:58:59.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:58:59.203000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:58:59.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.203000 audit[950]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff2674adf0 a2=4000 a3=7fff2674ae8c items=0 ppid=1 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:58:59.203000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:58:59.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:58:59.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.006830 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:58:55.818722 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:58:59.006848 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:58:55.819424 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:58:59.011558 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:58:55.819456 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:58:59.222183 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:58:55.819510 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:58:59.222821 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:58:55.819527 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:58:59.223482 systemd[1]: Reached target network-pre.target. Jul 2 07:58:55.819594 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:58:59.226986 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:58:55.819615 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:58:59.227491 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:58:55.819971 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:58:59.231978 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:58:55.820031 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:58:55.820048 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:58:55.822211 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:58:59.234213 systemd[1]: Starting systemd-journal-flush.service... 
Jul 2 07:58:55.822273 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:58:59.234865 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:58:55.822305 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:58:59.236562 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:58:55.822727 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:58:59.237451 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:58:55.822770 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:58:55.822786 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:58:58.523170 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:58:58.523537 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:58:58.523681 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:58:58.524573 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:58:58.524777 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:58:58.524894 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2024-07-02T07:58:58Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:58:59.246459 kernel: fuse: init (API version 7.34) Jul 2 07:58:59.245000 
audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.245034 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:58:59.245243 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:58:59.246038 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:58:59.246582 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:58:59.248883 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:58:59.252859 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:58:59.256554 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:58:59.272006 systemd-journald[950]: Time spent on flushing to /var/log/journal/4834573019b14fdda3702d821f62701a is 57.764ms for 1148 entries. Jul 2 07:58:59.272006 systemd-journald[950]: System Journal (/var/log/journal/4834573019b14fdda3702d821f62701a) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:58:59.337111 systemd-journald[950]: Received client request to flush runtime journal. Jul 2 07:58:59.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.272260 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:58:59.273862 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:58:59.300290 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:58:59.336197 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:58:59.338251 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:58:59.339184 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:58:59.350145 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:58:59.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.351969 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:58:59.363135 udevadm[991]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
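The torcx generator messages above end with the system state being "sealed" into /run/metadata/torcx, and the log quotes the sealed contents as shell-style KEY="value" assignments (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). A minimal sketch of reading that metadata back, assuming the file on disk uses exactly the KEY="value" one-per-line layout the generator's message quotes (the file itself is not reproduced in this log):

```python
# Sketch: parse the sealed torcx metadata written to /run/metadata/torcx.
# Assumes the file contains KEY="value" pairs, one per line, exactly as the
# generator's "system state sealed" message quotes them (an assumption; the
# real file is not reproduced in this log).
from pathlib import Path


def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict:
    """Return the TORCX_* variables as a plain dict."""
    meta = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        meta[key] = value.strip().strip('"')  # drop the surrounding quotes
    return meta


if __name__ == "__main__":
    # On the machine in this log this would report, e.g.,
    # TORCX_LOWER_PROFILES=vendor and TORCX_PROFILE_PATH=/run/torcx/profile.json.
    for key, value in read_torcx_metadata().items():
        print(f"{key}={value}")
```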
Jul 2 07:58:59.382517 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:58:59.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:58:59.384298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:58:59.420435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:58:59.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.024357 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:59:00.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.025000 audit: BPF prog-id=18 op=LOAD Jul 2 07:59:00.025000 audit: BPF prog-id=19 op=LOAD Jul 2 07:59:00.025000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:59:00.025000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:59:00.027350 systemd[1]: Starting systemd-udevd.service... Jul 2 07:59:00.056569 systemd-udevd[994]: Using default interface naming scheme 'v252'. Jul 2 07:59:00.096648 systemd[1]: Started systemd-udevd.service. Jul 2 07:59:00.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.098000 audit: BPF prog-id=20 op=LOAD Jul 2 07:59:00.099255 systemd[1]: Starting systemd-networkd.service... Jul 2 07:59:00.111000 audit: BPF prog-id=21 op=LOAD Jul 2 07:59:00.111000 audit: BPF prog-id=22 op=LOAD Jul 2 07:59:00.111000 audit: BPF prog-id=23 op=LOAD Jul 2 07:59:00.113568 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:59:00.171440 systemd[1]: Started systemd-userdbd.service. Jul 2 07:59:00.176631 kernel: kauditd_printk_skb: 107 callbacks suppressed Jul 2 07:59:00.176739 kernel: audit: type=1130 audit(1719907140.171:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.184338 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:59:00.184421 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:00.184639 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:59:00.186173 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:59:00.188555 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:59:00.192046 systemd[1]: Starting modprobe@loop.service... Jul 2 07:59:00.193877 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:59:00.193963 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 2 07:59:00.194064 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:00.194841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:59:00.195077 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:59:00.204344 kernel: audit: type=1130 audit(1719907140.195:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.204469 kernel: audit: type=1131 audit(1719907140.195:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.195978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:59:00.196515 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:59:00.205148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:59:00.205513 systemd[1]: Finished modprobe@loop.service. Jul 2 07:59:00.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.213045 kernel: audit: type=1130 audit(1719907140.204:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.207189 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:59:00.207242 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:59:00.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.218401 kernel: audit: type=1131 audit(1719907140.204:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.229712 kernel: audit: type=1130 audit(1719907140.205:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.229818 kernel: audit: type=1131 audit(1719907140.205:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:59:00.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.270117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:59:00.308716 systemd-networkd[998]: lo: Link UP Jul 2 07:59:00.308731 systemd-networkd[998]: lo: Gained carrier Jul 2 07:59:00.309341 systemd-networkd[998]: Enumeration completed Jul 2 07:59:00.309485 systemd[1]: Started systemd-networkd.service. Jul 2 07:59:00.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.310990 systemd-networkd[998]: eth1: Configuring with /run/systemd/network/10-a6:1a:e5:4a:6e:1b.network. Jul 2 07:59:00.314350 kernel: audit: type=1130 audit(1719907140.309:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.314366 systemd-networkd[998]: eth0: Configuring with /run/systemd/network/10-26:9e:2a:69:6b:1c.network. Jul 2 07:59:00.315091 systemd-networkd[998]: eth1: Link UP Jul 2 07:59:00.315102 systemd-networkd[998]: eth1: Gained carrier Jul 2 07:59:00.316866 systemd-networkd[998]: eth0: Link UP Jul 2 07:59:00.316879 systemd-networkd[998]: eth0: Gained carrier Jul 2 07:59:00.342411 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:59:00.354362 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:59:00.371000 audit[1002]: AVC avc: denied { confidentiality } for pid=1002 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:59:00.389346 kernel: audit: type=1400 audit(1719907140.371:156): avc: denied { confidentiality } for pid=1002 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:59:00.371000 audit[1002]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5592da95ce50 a1=3207c a2=7f7fb8a5fbc5 a3=5 items=108 ppid=994 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:59:00.410346 kernel: audit: type=1300 audit(1719907140.371:156): arch=c000003e syscall=175 success=yes exit=0 a0=5592da95ce50 a1=3207c a2=7f7fb8a5fbc5 a3=5 items=108 ppid=994 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:59:00.371000 audit: CWD cwd="/" Jul 2 07:59:00.371000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=1 name=(null) inode=14441 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=2 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=3 name=(null) inode=14442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=4 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=5 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=6 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=7 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=8 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=9 name=(null) inode=14445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=10 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=11 name=(null) inode=14446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=12 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=13 name=(null) inode=14447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=14 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=15 name=(null) inode=14448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=16 name=(null) inode=14444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=17 name=(null) inode=14449 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=18 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=19 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=20 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=21 name=(null) inode=14451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=22 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=23 name=(null) inode=14452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=24 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=25 name=(null) inode=14453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=26 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=27 name=(null) inode=14454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=28 name=(null) inode=14450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=29 name=(null) inode=14455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=30 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=31 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=32 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=33 name=(null) inode=14457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH 
item=34 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=35 name=(null) inode=14458 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=36 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=37 name=(null) inode=14459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=38 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=39 name=(null) inode=14460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=40 name=(null) inode=14456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=41 name=(null) inode=14461 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=42 name=(null) inode=14441 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=43 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=44 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=45 name=(null) inode=14463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=46 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=47 name=(null) inode=14464 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=48 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=49 name=(null) inode=14465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=50 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=51 name=(null) inode=14466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=52 name=(null) inode=14462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=53 name=(null) inode=14467 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=55 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=56 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=57 name=(null) inode=14469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=58 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=59 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=60 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=61 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=62 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=63 name=(null) inode=14472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=64 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=65 name=(null) inode=14473 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=66 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=67 name=(null) inode=14474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=68 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=69 name=(null) inode=14475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=70 name=(null) inode=14471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=71 name=(null) inode=14476 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=72 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=73 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=74 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=75 name=(null) inode=14478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=76 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=77 name=(null) inode=14479 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=78 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=79 name=(null) inode=14480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=80 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=81 name=(null) inode=14481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=82 name=(null) inode=14477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=83 name=(null) inode=14482 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=84 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=85 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=86 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=87 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=88 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=89 name=(null) inode=14485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=90 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=91 name=(null) inode=14486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=92 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=93 name=(null) inode=14487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=94 name=(null) inode=14483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=95 name=(null) inode=14488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=96 name=(null) inode=14468 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=97 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=98 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=99 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=100 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=101 name=(null) inode=14491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=102 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=103 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=104 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=105 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=106 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PATH item=107 name=(null) inode=14494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:59:00.371000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:59:00.433346 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 07:59:00.444467 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:59:00.451399 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:59:00.568471 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:59:00.586034 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:59:00.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.589005 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:59:00.614991 lvm[1032]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:59:00.642955 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:59:00.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.643600 systemd[1]: Reached target cryptsetup.target. Jul 2 07:59:00.651536 systemd[1]: Starting lvm2-activation.service... Jul 2 07:59:00.660202 lvm[1033]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:59:00.694445 systemd[1]: Finished lvm2-activation.service. Jul 2 07:59:00.695259 systemd[1]: Reached target local-fs-pre.target. 
Jul 2 07:59:00.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.697898 systemd[1]: Mounting media-configdrive.mount... Jul 2 07:59:00.700660 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:59:00.700721 systemd[1]: Reached target machines.target. Jul 2 07:59:00.703065 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:59:00.719352 kernel: ISO 9660 Extensions: RRIP_1991A Jul 2 07:59:00.721624 systemd[1]: Mounted media-configdrive.mount. Jul 2 07:59:00.722306 systemd[1]: Reached target local-fs.target. Jul 2 07:59:00.725429 systemd[1]: Starting ldconfig.service... Jul 2 07:59:00.728847 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:59:00.728936 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:00.730999 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:59:00.736175 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:59:00.744534 systemd[1]: Starting systemd-sysext.service... Jul 2 07:59:00.750473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:59:00.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.751585 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1040 (bootctl) Jul 2 07:59:00.754158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:59:00.772790 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:59:00.781474 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:59:00.781773 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:59:00.819737 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 07:59:00.898204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:59:00.899295 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:59:00.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.928425 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:59:00.948464 systemd-fsck[1047]: fsck.fat 4.2 (2021-01-31) Jul 2 07:59:00.948464 systemd-fsck[1047]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 07:59:00.957412 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 07:59:00.959486 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:59:00.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:00.962493 systemd[1]: Mounting boot.mount... Jul 2 07:59:00.986181 systemd[1]: Mounted boot.mount. 
Jul 2 07:59:01.001460 (sd-sysext)[1051]: Using extensions 'kubernetes'. Jul 2 07:59:01.002052 (sd-sysext)[1051]: Merged extensions into '/usr'. Jul 2 07:59:01.012153 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:59:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.039584 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.041960 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:59:01.045646 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:59:01.048846 systemd[1]: Starting modprobe@loop.service... Jul 2 07:59:01.050531 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.050763 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:01.052075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:59:01.052374 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:59:01.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.053503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:59:01.053658 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:59:01.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.055273 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:59:01.055480 systemd[1]: Finished modprobe@loop.service. Jul 2 07:59:01.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.056946 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:59:01.057088 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.185410 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.188315 systemd[1]: Mounting usr-share-oem.mount... 
Jul 2 07:59:01.189026 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.207893 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:59:01.210564 systemd[1]: Finished systemd-sysext.service. Jul 2 07:59:01.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.213954 systemd[1]: Starting ensure-sysext.service... Jul 2 07:59:01.216611 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:59:01.234572 systemd[1]: Reloading. Jul 2 07:59:01.263171 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:59:01.269779 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:59:01.278648 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:59:01.371388 ldconfig[1039]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:59:01.405696 systemd-networkd[998]: eth1: Gained IPv6LL Jul 2 07:59:01.424712 /usr/lib/systemd/system-generators/torcx-generator[1081]: time="2024-07-02T07:59:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:59:01.427549 /usr/lib/systemd/system-generators/torcx-generator[1081]: time="2024-07-02T07:59:01Z" level=info msg="torcx already run" Jul 2 07:59:01.569092 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:59:01.569596 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:59:01.599919 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:59:01.661639 systemd-networkd[998]: eth0: Gained IPv6LL Jul 2 07:59:01.677000 audit: BPF prog-id=24 op=LOAD Jul 2 07:59:01.678000 audit: BPF prog-id=25 op=LOAD Jul 2 07:59:01.678000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:59:01.678000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:59:01.679000 audit: BPF prog-id=26 op=LOAD Jul 2 07:59:01.680000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:59:01.681000 audit: BPF prog-id=27 op=LOAD Jul 2 07:59:01.681000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:59:01.681000 audit: BPF prog-id=28 op=LOAD Jul 2 07:59:01.682000 audit: BPF prog-id=29 op=LOAD Jul 2 07:59:01.682000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:59:01.682000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:59:01.684000 audit: BPF prog-id=30 op=LOAD Jul 2 07:59:01.684000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:59:01.684000 audit: BPF prog-id=31 op=LOAD Jul 2 07:59:01.684000 audit: BPF prog-id=32 op=LOAD Jul 2 07:59:01.684000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:59:01.685000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:59:01.689640 systemd[1]: Finished ldconfig.service. 
Jul 2 07:59:01.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.693832 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:59:01.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.699755 systemd[1]: Starting audit-rules.service... Jul 2 07:59:01.703046 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:59:01.719049 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:59:01.721000 audit: BPF prog-id=33 op=LOAD Jul 2 07:59:01.723718 systemd[1]: Starting systemd-resolved.service... Jul 2 07:59:01.728000 audit: BPF prog-id=34 op=LOAD Jul 2 07:59:01.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.730497 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:59:01.733084 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:59:01.773000 audit[1132]: SYSTEM_BOOT pid=1132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.735556 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:59:01.739189 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:59:01.743162 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.743565 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 07:59:01.747265 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:59:01.751894 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:59:01.757640 systemd[1]: Starting modprobe@loop.service... Jul 2 07:59:01.758260 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.758455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:01.758588 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:59:01.758673 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.759799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:59:01.761240 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:59:01.762661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:59:01.762878 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:59:01.764033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:59:01.764231 systemd[1]: Finished modprobe@loop.service. Jul 2 07:59:01.765450 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:59:01.765634 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.768631 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.768936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.771503 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:59:01.775550 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:59:01.780681 systemd[1]: Starting modprobe@loop.service... Jul 2 07:59:01.781308 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.781580 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:01.781866 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:59:01.782002 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.791919 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.792591 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.796055 systemd[1]: Starting modprobe@drm.service... Jul 2 07:59:01.797252 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.797698 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:01.801157 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 2 07:59:01.802667 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:59:01.803105 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:59:01.805465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:59:01.805758 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:59:01.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.810605 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:59:01.811789 systemd[1]: Finished ensure-sysext.service. Jul 2 07:59:01.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.831791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:59:01.832040 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:59:01.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.832917 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:59:01.833469 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:59:01.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.834871 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:59:01.835042 systemd[1]: Finished modprobe@loop.service. Jul 2 07:59:01.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.835713 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.837172 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 2 07:59:01.837412 systemd[1]: Finished modprobe@drm.service. Jul 2 07:59:01.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.840418 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:59:01.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.843634 systemd[1]: Starting systemd-update-done.service... Jul 2 07:59:01.864915 systemd[1]: Finished systemd-update-done.service. Jul 2 07:59:01.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:59:01.886000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:59:01.886000 audit[1155]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc07923c0 a2=420 a3=0 items=0 ppid=1126 pid=1155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:59:01.886000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:59:01.887475 augenrules[1155]: No rules Jul 2 07:59:01.888151 systemd[1]: Finished audit-rules.service. Jul 2 07:59:01.920679 systemd-resolved[1130]: Positive Trust Anchors: Jul 2 07:59:01.920700 systemd-resolved[1130]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:59:01.920734 systemd-resolved[1130]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:59:01.928033 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:59:01.928972 systemd[1]: Reached target time-set.target. Jul 2 07:59:01.932973 systemd-resolved[1130]: Using system hostname 'ci-3510.3.5-2-fce33301fd'. Jul 2 07:59:01.935574 systemd[1]: Started systemd-resolved.service. Jul 2 07:59:01.936565 systemd[1]: Reached target network.target. Jul 2 07:59:01.937096 systemd[1]: Reached target network-online.target. Jul 2 07:59:01.937662 systemd[1]: Reached target nss-lookup.target. Jul 2 07:59:01.938166 systemd[1]: Reached target sysinit.target. Jul 2 07:59:01.938824 systemd[1]: Started motdgen.path. Jul 2 07:59:01.939378 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:59:01.940162 systemd[1]: Started logrotate.timer. Jul 2 07:59:01.940772 systemd[1]: Started mdadm.timer. 
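The PROCTITLE field in the audit record above is the auditctl command line, hex-encoded with NUL bytes separating arguments (the accompanying SYSCALL record, syscall=44 on x86_64, is sendto, consistent with auditctl pushing rules over the audit netlink socket). Decoding it, purely as an illustration with the hex copied verbatim from the record:

    # Audit PROCTITLE fields are hex-encoded argv strings joined by NUL bytes.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    args = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(args)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

This matches the augenrules run that reported "No rules" just afterwards.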
Jul 2 07:59:01.941238 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:59:01.941891 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:59:01.941954 systemd[1]: Reached target paths.target. Jul 2 07:59:01.942441 systemd[1]: Reached target timers.target. Jul 2 07:59:01.943287 systemd[1]: Listening on dbus.socket. Jul 2 07:59:01.945455 systemd[1]: Starting docker.socket... Jul 2 07:59:01.951732 systemd[1]: Listening on sshd.socket. Jul 2 07:59:01.952635 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:01.953529 systemd[1]: Listening on docker.socket. Jul 2 07:59:01.954240 systemd[1]: Reached target sockets.target. Jul 2 07:59:01.954784 systemd[1]: Reached target basic.target. Jul 2 07:59:01.955383 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.955429 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:59:01.958183 systemd[1]: Starting containerd.service... Jul 2 07:59:01.960859 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 07:59:01.968251 systemd[1]: Starting dbus.service... Jul 2 07:59:01.971986 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:59:01.974652 systemd[1]: Starting extend-filesystems.service... Jul 2 07:59:02.014585 jq[1168]: false Jul 2 07:59:01.975367 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:59:01.979727 systemd[1]: Starting kubelet.service... Jul 2 07:59:01.985112 systemd[1]: Starting motdgen.service... Jul 2 07:59:01.990154 systemd[1]: Starting prepare-helm.service... Jul 2 07:59:01.993384 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:59:01.997981 systemd[1]: Starting sshd-keygen.service... Jul 2 07:59:02.003760 systemd[1]: Starting systemd-logind.service... Jul 2 07:59:02.004718 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:59:02.004839 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:59:02.005712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:59:02.009394 systemd[1]: Starting update-engine.service... Jul 2 07:59:02.015847 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:59:02.020689 systemd-timesyncd[1131]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). Jul 2 07:59:02.020787 systemd-timesyncd[1131]: Initial clock synchronization to Tue 2024-07-02 07:59:02.262012 UTC. Jul 2 07:59:02.022185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:59:02.022583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:59:02.028304 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:59:02.030728 systemd[1]: Finished ssh-key-proc-cmdline.service. 
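The "cgroup compatibility translation between legacy and unified hierarchy settings" message above is emitted when unit files use directives written for the other cgroup layout than the one mounted (the CPUShares=/MemoryLimit= warnings for locksmithd later in this log are an example). Which layout is mounted can be checked by looking for the v2-only cgroup.controllers file at the cgroup root; a minimal sketch, not part of the log:

    import os

    # A unified (cgroup v2) hierarchy exposes cgroup.controllers at its root;
    # the legacy v1 layout does not.
    if os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
        print("unified hierarchy (cgroup v2)")
    else:
        print("legacy or hybrid hierarchy (cgroup v1)")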
Jul 2 07:59:02.046165 jq[1182]: true Jul 2 07:59:02.089989 tar[1185]: linux-amd64/helm Jul 2 07:59:02.097018 dbus-daemon[1166]: [system] SELinux support is enabled Jul 2 07:59:02.097513 jq[1189]: true Jul 2 07:59:02.097984 systemd[1]: Started dbus.service. Jul 2 07:59:02.101014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:59:02.101060 systemd[1]: Reached target system-config.target. Jul 2 07:59:02.102143 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:59:02.102182 systemd[1]: Reached target user-config.target. Jul 2 07:59:02.103079 extend-filesystems[1169]: Found loop1 Jul 2 07:59:02.113937 extend-filesystems[1169]: Found vda Jul 2 07:59:02.122634 extend-filesystems[1169]: Found vda1 Jul 2 07:59:02.123555 extend-filesystems[1169]: Found vda2 Jul 2 07:59:02.124193 extend-filesystems[1169]: Found vda3 Jul 2 07:59:02.124866 extend-filesystems[1169]: Found usr Jul 2 07:59:02.126243 extend-filesystems[1169]: Found vda4 Jul 2 07:59:02.126243 extend-filesystems[1169]: Found vda6 Jul 2 07:59:02.126243 extend-filesystems[1169]: Found vda7 Jul 2 07:59:02.126243 extend-filesystems[1169]: Found vda9 Jul 2 07:59:02.126243 extend-filesystems[1169]: Checking size of /dev/vda9 Jul 2 07:59:02.163519 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:59:02.163743 systemd[1]: Finished motdgen.service. Jul 2 07:59:02.192641 extend-filesystems[1169]: Resized partition /dev/vda9 Jul 2 07:59:02.200841 extend-filesystems[1215]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:59:02.209486 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 2 07:59:02.214921 update_engine[1180]: I0702 07:59:02.214179 1180 main.cc:92] Flatcar Update Engine starting Jul 2 07:59:02.221195 systemd[1]: Started update-engine.service. Jul 2 07:59:02.224714 systemd[1]: Started locksmithd.service. Jul 2 07:59:02.226679 update_engine[1180]: I0702 07:59:02.226616 1180 update_check_scheduler.cc:74] Next update check in 10m17s Jul 2 07:59:02.326950 env[1191]: time="2024-07-02T07:59:02.326872363Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:59:02.345536 bash[1220]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:59:02.346210 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:59:02.356661 systemd-logind[1177]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:59:02.356702 systemd-logind[1177]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:59:02.358991 systemd-logind[1177]: New seat seat0. Jul 2 07:59:02.365098 systemd[1]: Started systemd-logind.service. Jul 2 07:59:02.378366 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 2 07:59:02.422542 extend-filesystems[1215]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:59:02.422542 extend-filesystems[1215]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 2 07:59:02.422542 extend-filesystems[1215]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 2 07:59:02.432508 extend-filesystems[1169]: Resized filesystem in /dev/vda9 Jul 2 07:59:02.432508 extend-filesystems[1169]: Found vdb Jul 2 07:59:02.424573 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:59:02.424871 systemd[1]: Finished extend-filesystems.service. 
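The online resize above grows the root filesystem from 553472 to 15121403 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 57.7 GiB. A quick check of that arithmetic, shown only as an illustration:

    BLOCK_SIZE = 4096  # 4 KiB blocks, as reported for /dev/vda9 above

    for label, blocks in (("before", 553_472), ("after", 15_121_403)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 57.68 GiB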
Jul 2 07:59:02.463440 coreos-metadata[1164]: Jul 02 07:59:02.462 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 07:59:02.480667 env[1191]: time="2024-07-02T07:59:02.480509937Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:59:02.480882 env[1191]: time="2024-07-02T07:59:02.480798314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.485799 env[1191]: time="2024-07-02T07:59:02.485718383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:59:02.486214 env[1191]: time="2024-07-02T07:59:02.486184672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.486431 coreos-metadata[1164]: Jul 02 07:59:02.486 INFO Fetch successful Jul 2 07:59:02.486873 env[1191]: time="2024-07-02T07:59:02.486833004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:59:02.487086 env[1191]: time="2024-07-02T07:59:02.487066135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.487179 env[1191]: time="2024-07-02T07:59:02.487164774Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:59:02.487234 env[1191]: time="2024-07-02T07:59:02.487222870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.487482 env[1191]: time="2024-07-02T07:59:02.487449405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.488225 env[1191]: time="2024-07-02T07:59:02.488198772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:59:02.489442 env[1191]: time="2024-07-02T07:59:02.489410256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:59:02.489797 env[1191]: time="2024-07-02T07:59:02.489764082Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:59:02.490076 env[1191]: time="2024-07-02T07:59:02.490051853Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:59:02.490568 env[1191]: time="2024-07-02T07:59:02.490544428Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:59:02.496717 unknown[1164]: wrote ssh authorized keys file for user: core Jul 2 07:59:02.504565 env[1191]: time="2024-07-02T07:59:02.504518569Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:59:02.504780 env[1191]: time="2024-07-02T07:59:02.504763924Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 07:59:02.504843 env[1191]: time="2024-07-02T07:59:02.504830892Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:59:02.504948 env[1191]: time="2024-07-02T07:59:02.504934661Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505074 env[1191]: time="2024-07-02T07:59:02.505061547Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505139 env[1191]: time="2024-07-02T07:59:02.505127500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505386 env[1191]: time="2024-07-02T07:59:02.505371382Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505460 env[1191]: time="2024-07-02T07:59:02.505446899Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505531 env[1191]: time="2024-07-02T07:59:02.505518702Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505590 env[1191]: time="2024-07-02T07:59:02.505578568Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505649 env[1191]: time="2024-07-02T07:59:02.505637370Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.505730 env[1191]: time="2024-07-02T07:59:02.505710924Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:59:02.507670 env[1191]: time="2024-07-02T07:59:02.507626829Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:59:02.508059 env[1191]: time="2024-07-02T07:59:02.508028896Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:59:02.508649 env[1191]: time="2024-07-02T07:59:02.508621022Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:59:02.508793 env[1191]: time="2024-07-02T07:59:02.508770758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.508922 env[1191]: time="2024-07-02T07:59:02.508899861Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:59:02.509099 env[1191]: time="2024-07-02T07:59:02.509077242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510139 env[1191]: time="2024-07-02T07:59:02.510098299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510309 env[1191]: time="2024-07-02T07:59:02.510289562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510437 env[1191]: time="2024-07-02T07:59:02.510418686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510508 env[1191]: time="2024-07-02T07:59:02.510494920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 2 07:59:02.510570 env[1191]: time="2024-07-02T07:59:02.510558423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510627 env[1191]: time="2024-07-02T07:59:02.510615910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510705 env[1191]: time="2024-07-02T07:59:02.510688094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.510875 env[1191]: time="2024-07-02T07:59:02.510850965Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:59:02.511249 env[1191]: time="2024-07-02T07:59:02.511219585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.512452 env[1191]: time="2024-07-02T07:59:02.512420674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.512578 env[1191]: time="2024-07-02T07:59:02.512556858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:59:02.512693 env[1191]: time="2024-07-02T07:59:02.512672730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:59:02.512794 env[1191]: time="2024-07-02T07:59:02.512770391Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:59:02.512881 env[1191]: time="2024-07-02T07:59:02.512862498Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:59:02.512980 env[1191]: time="2024-07-02T07:59:02.512963037Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:59:02.513101 env[1191]: time="2024-07-02T07:59:02.513086821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 07:59:02.513495 env[1191]: time="2024-07-02T07:59:02.513405391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:59:02.517746 env[1191]: time="2024-07-02T07:59:02.513728066Z" level=info msg="Connect containerd service" Jul 2 07:59:02.517746 env[1191]: time="2024-07-02T07:59:02.513790604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:59:02.520383 env[1191]: time="2024-07-02T07:59:02.520292234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:59:02.521189 env[1191]: time="2024-07-02T07:59:02.521126549Z" level=info msg="Start subscribing containerd event" Jul 2 07:59:02.521403 env[1191]: time="2024-07-02T07:59:02.521377564Z" level=info msg="Start recovering state" Jul 2 07:59:02.521588 env[1191]: time="2024-07-02T07:59:02.521571989Z" level=info msg="Start event monitor" Jul 2 07:59:02.523299 env[1191]: time="2024-07-02T07:59:02.523269705Z" level=info msg="Start snapshots syncer" Jul 2 07:59:02.523437 env[1191]: time="2024-07-02T07:59:02.523422556Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:59:02.523499 env[1191]: time="2024-07-02T07:59:02.523487178Z" level=info msg="Start streaming server" Jul 2 07:59:02.523646 env[1191]: time="2024-07-02T07:59:02.523225412Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 2 07:59:02.523974 update-ssh-keys[1226]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:59:02.524473 env[1191]: time="2024-07-02T07:59:02.524447999Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:59:02.524521 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 07:59:02.524749 env[1191]: time="2024-07-02T07:59:02.524732164Z" level=info msg="containerd successfully booted in 0.213124s" Jul 2 07:59:02.525755 systemd[1]: Started containerd.service. Jul 2 07:59:03.238944 tar[1185]: linux-amd64/LICENSE Jul 2 07:59:03.241836 tar[1185]: linux-amd64/README.md Jul 2 07:59:03.254522 systemd[1]: Finished prepare-helm.service. Jul 2 07:59:03.317119 locksmithd[1216]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:59:03.837730 systemd[1]: Started kubelet.service. Jul 2 07:59:04.001788 sshd_keygen[1195]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:59:04.044780 systemd[1]: Finished sshd-keygen.service. Jul 2 07:59:04.047278 systemd[1]: Starting issuegen.service... Jul 2 07:59:04.055600 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:59:04.055901 systemd[1]: Finished issuegen.service. Jul 2 07:59:04.058735 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:59:04.071836 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:59:04.074754 systemd[1]: Started getty@tty1.service. Jul 2 07:59:04.077447 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:59:04.078235 systemd[1]: Reached target getty.target. Jul 2 07:59:04.079133 systemd[1]: Reached target multi-user.target. Jul 2 07:59:04.081700 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:59:04.095166 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:59:04.095487 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:59:04.096521 systemd[1]: Startup finished in 983ms (kernel) + 5.760s (initrd) + 8.518s (userspace) = 15.263s. Jul 2 07:59:04.846497 kubelet[1236]: E0702 07:59:04.846395 1236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:59:04.849404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:59:04.849618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:59:04.849998 systemd[1]: kubelet.service: Consumed 1.483s CPU time. Jul 2 07:59:07.626315 systemd[1]: Created slice system-sshd.slice. Jul 2 07:59:07.628224 systemd[1]: Started sshd@0-146.190.152.6:22-147.75.109.163:39606.service. Jul 2 07:59:07.701648 sshd[1258]: Accepted publickey for core from 147.75.109.163 port 39606 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 07:59:07.705309 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:07.719208 systemd[1]: Created slice user-500.slice. Jul 2 07:59:07.720910 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:59:07.728487 systemd-logind[1177]: New session 1 of user core. Jul 2 07:59:07.736734 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:59:07.739110 systemd[1]: Starting user@500.service... 
Jul 2 07:59:07.746315 (systemd)[1261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:07.848462 systemd[1261]: Queued start job for default target default.target. Jul 2 07:59:07.849921 systemd[1261]: Reached target paths.target. Jul 2 07:59:07.850149 systemd[1261]: Reached target sockets.target. Jul 2 07:59:07.850269 systemd[1261]: Reached target timers.target. Jul 2 07:59:07.850378 systemd[1261]: Reached target basic.target. Jul 2 07:59:07.850513 systemd[1261]: Reached target default.target. Jul 2 07:59:07.850638 systemd[1261]: Startup finished in 93ms. Jul 2 07:59:07.850765 systemd[1]: Started user@500.service. Jul 2 07:59:07.852539 systemd[1]: Started session-1.scope. Jul 2 07:59:07.926529 systemd[1]: Started sshd@1-146.190.152.6:22-147.75.109.163:39614.service. Jul 2 07:59:07.979962 sshd[1270]: Accepted publickey for core from 147.75.109.163 port 39614 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 07:59:07.982097 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:07.989179 systemd-logind[1177]: New session 2 of user core. Jul 2 07:59:07.990302 systemd[1]: Started session-2.scope. Jul 2 07:59:08.064630 sshd[1270]: pam_unix(sshd:session): session closed for user core Jul 2 07:59:08.071814 systemd[1]: Started sshd@2-146.190.152.6:22-147.75.109.163:39626.service. Jul 2 07:59:08.073608 systemd[1]: sshd@1-146.190.152.6:22-147.75.109.163:39614.service: Deactivated successfully. Jul 2 07:59:08.074790 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:59:08.077284 systemd-logind[1177]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:59:08.079363 systemd-logind[1177]: Removed session 2. Jul 2 07:59:08.115556 sshd[1275]: Accepted publickey for core from 147.75.109.163 port 39626 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 07:59:08.117910 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:08.124742 systemd[1]: Started session-3.scope. Jul 2 07:59:08.126281 systemd-logind[1177]: New session 3 of user core. Jul 2 07:59:08.186754 sshd[1275]: pam_unix(sshd:session): session closed for user core Jul 2 07:59:08.192972 systemd[1]: sshd@2-146.190.152.6:22-147.75.109.163:39626.service: Deactivated successfully. Jul 2 07:59:08.193960 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:59:08.195319 systemd-logind[1177]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:59:08.197379 systemd[1]: Started sshd@3-146.190.152.6:22-147.75.109.163:39628.service. Jul 2 07:59:08.198600 systemd-logind[1177]: Removed session 3. Jul 2 07:59:08.246008 sshd[1282]: Accepted publickey for core from 147.75.109.163 port 39628 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 07:59:08.248074 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:08.256485 systemd-logind[1177]: New session 4 of user core. Jul 2 07:59:08.258365 systemd[1]: Started session-4.scope. Jul 2 07:59:08.332205 sshd[1282]: pam_unix(sshd:session): session closed for user core Jul 2 07:59:08.339016 systemd[1]: sshd@3-146.190.152.6:22-147.75.109.163:39628.service: Deactivated successfully. Jul 2 07:59:08.340151 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:59:08.341222 systemd-logind[1177]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:59:08.343263 systemd[1]: Started sshd@4-146.190.152.6:22-147.75.109.163:39640.service. 
Jul 2 07:59:08.345248 systemd-logind[1177]: Removed session 4. Jul 2 07:59:08.394771 sshd[1288]: Accepted publickey for core from 147.75.109.163 port 39640 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 07:59:08.396764 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:59:08.403290 systemd-logind[1177]: New session 5 of user core. Jul 2 07:59:08.403739 systemd[1]: Started session-5.scope. Jul 2 07:59:08.481733 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:59:08.482911 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:59:08.535211 systemd[1]: Starting docker.service... Jul 2 07:59:08.597705 env[1301]: time="2024-07-02T07:59:08.597613231Z" level=info msg="Starting up" Jul 2 07:59:08.602522 env[1301]: time="2024-07-02T07:59:08.602468667Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:59:08.603051 env[1301]: time="2024-07-02T07:59:08.603017973Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:59:08.603233 env[1301]: time="2024-07-02T07:59:08.603204646Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:59:08.603537 env[1301]: time="2024-07-02T07:59:08.603311074Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:59:08.606297 env[1301]: time="2024-07-02T07:59:08.606233591Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:59:08.606297 env[1301]: time="2024-07-02T07:59:08.606271341Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:59:08.606297 env[1301]: time="2024-07-02T07:59:08.606292561Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:59:08.606297 env[1301]: time="2024-07-02T07:59:08.606303244Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:59:08.614684 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport372301504-merged.mount: Deactivated successfully. Jul 2 07:59:08.686201 env[1301]: time="2024-07-02T07:59:08.686149201Z" level=info msg="Loading containers: start." Jul 2 07:59:08.914377 kernel: Initializing XFRM netlink socket Jul 2 07:59:08.958871 env[1301]: time="2024-07-02T07:59:08.958822378Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:59:09.046095 systemd-networkd[998]: docker0: Link UP Jul 2 07:59:09.066508 env[1301]: time="2024-07-02T07:59:09.066458660Z" level=info msg="Loading containers: done." Jul 2 07:59:09.085684 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3390973087-merged.mount: Deactivated successfully. Jul 2 07:59:09.094834 env[1301]: time="2024-07-02T07:59:09.094764781Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:59:09.095314 env[1301]: time="2024-07-02T07:59:09.095115357Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:59:09.095413 env[1301]: time="2024-07-02T07:59:09.095365632Z" level=info msg="Daemon has completed initialization" Jul 2 07:59:09.123836 systemd[1]: Started docker.service. 
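dockerd above assigns the default docker0 bridge the 172.17.0.0/16 pool and points out that --bip can select a different range. Purely as an illustration of the size of that default pool:

    import ipaddress

    bridge = ipaddress.ip_network("172.17.0.0/16")  # subnet from the dockerd message above
    print(bridge.num_addresses)      # 65536 addresses in the pool
    print(bridge.num_addresses - 2)  # 65534 usable once network and broadcast are excluded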
Jul 2 07:59:09.134978 env[1301]: time="2024-07-02T07:59:09.134888793Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:59:09.162887 systemd[1]: Starting coreos-metadata.service... Jul 2 07:59:09.216540 coreos-metadata[1418]: Jul 02 07:59:09.216 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 07:59:09.228951 coreos-metadata[1418]: Jul 02 07:59:09.228 INFO Fetch successful Jul 2 07:59:09.243830 systemd[1]: Finished coreos-metadata.service. Jul 2 07:59:10.352565 env[1191]: time="2024-07-02T07:59:10.352488432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 07:59:11.236861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291399014.mount: Deactivated successfully. Jul 2 07:59:13.656867 env[1191]: time="2024-07-02T07:59:13.656790492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:13.659718 env[1191]: time="2024-07-02T07:59:13.659669280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:13.663006 env[1191]: time="2024-07-02T07:59:13.662945348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:13.665990 env[1191]: time="2024-07-02T07:59:13.665927789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:13.667180 env[1191]: time="2024-07-02T07:59:13.667119755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 07:59:13.689236 env[1191]: time="2024-07-02T07:59:13.689161659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 07:59:15.101369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:59:15.101867 systemd[1]: Stopped kubelet.service. Jul 2 07:59:15.101981 systemd[1]: kubelet.service: Consumed 1.483s CPU time. Jul 2 07:59:15.106858 systemd[1]: Starting kubelet.service... Jul 2 07:59:15.320420 systemd[1]: Started kubelet.service. Jul 2 07:59:15.496481 kubelet[1451]: E0702 07:59:15.496169 1451 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:59:15.503360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:59:15.503634 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:59:16.302409 systemd[1]: Started sshd@5-146.190.152.6:22-87.251.88.6:47198.service. 
Jul 2 07:59:17.033211 env[1191]: time="2024-07-02T07:59:17.033113936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:17.036763 env[1191]: time="2024-07-02T07:59:17.036703506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:17.044111 env[1191]: time="2024-07-02T07:59:17.044056806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:17.047957 env[1191]: time="2024-07-02T07:59:17.047819624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:17.049341 env[1191]: time="2024-07-02T07:59:17.049231283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 07:59:17.077408 env[1191]: time="2024-07-02T07:59:17.077277624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 07:59:17.270584 sshd[1458]: Invalid user ubuntu from 87.251.88.6 port 47198 Jul 2 07:59:17.274270 sshd[1458]: pam_faillock(sshd:auth): User unknown Jul 2 07:59:17.275434 sshd[1458]: pam_unix(sshd:auth): check pass; user unknown Jul 2 07:59:17.275543 sshd[1458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=87.251.88.6 Jul 2 07:59:17.276318 sshd[1458]: pam_faillock(sshd:auth): User unknown Jul 2 07:59:18.873544 sshd[1458]: Failed password for invalid user ubuntu from 87.251.88.6 port 47198 ssh2 Jul 2 07:59:18.932001 env[1191]: time="2024-07-02T07:59:18.931919205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:18.936160 env[1191]: time="2024-07-02T07:59:18.936085980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:18.940223 env[1191]: time="2024-07-02T07:59:18.940111613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:18.943720 env[1191]: time="2024-07-02T07:59:18.943663256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:18.944865 env[1191]: time="2024-07-02T07:59:18.944816953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 07:59:18.965932 env[1191]: time="2024-07-02T07:59:18.965857930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 07:59:19.741828 sshd[1458]: Received disconnect 
from 87.251.88.6 port 47198:11: Bye Bye [preauth] Jul 2 07:59:19.741828 sshd[1458]: Disconnected from invalid user ubuntu 87.251.88.6 port 47198 [preauth] Jul 2 07:59:19.743763 systemd[1]: sshd@5-146.190.152.6:22-87.251.88.6:47198.service: Deactivated successfully. Jul 2 07:59:20.461125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547141452.mount: Deactivated successfully. Jul 2 07:59:21.301653 env[1191]: time="2024-07-02T07:59:21.301594276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:21.303607 env[1191]: time="2024-07-02T07:59:21.303560061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:21.304812 env[1191]: time="2024-07-02T07:59:21.304779582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:21.305944 env[1191]: time="2024-07-02T07:59:21.305915980Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:21.306265 env[1191]: time="2024-07-02T07:59:21.306232385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 07:59:21.321858 env[1191]: time="2024-07-02T07:59:21.321804377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:59:21.988224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195871707.mount: Deactivated successfully. Jul 2 07:59:23.333021 env[1191]: time="2024-07-02T07:59:23.332949840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:23.337397 env[1191]: time="2024-07-02T07:59:23.337340982Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:23.341390 env[1191]: time="2024-07-02T07:59:23.341307921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:23.344871 env[1191]: time="2024-07-02T07:59:23.344806897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:23.345874 env[1191]: time="2024-07-02T07:59:23.345832487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:59:23.362511 env[1191]: time="2024-07-02T07:59:23.362418480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:59:24.055343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310945251.mount: Deactivated successfully. 
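The sshd records above (an invalid user "ubuntu" from 87.251.88.6, a failed password, then a pre-auth disconnect) are the usual signature of an opportunistic SSH probe. As a rough sketch only, assuming a plain-text export of such journal lines, failed attempts could be tallied per user and source address like this:

    import re
    from collections import Counter

    # Matches lines like the "Failed password ..." record logged above.
    FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+) port \d+")

    def tally_failed_logins(lines):
        """Count failed SSH logins per (user, source address) pair."""
        counts = Counter()
        for line in lines:
            match = FAILED.search(line)
            if match:
                counts[(match.group(1), match.group(2))] += 1
        return counts

    sample = ["sshd[1458]: Failed password for invalid user ubuntu from 87.251.88.6 port 47198 ssh2"]
    print(tally_failed_logins(sample))  # Counter({('ubuntu', '87.251.88.6'): 1})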
Jul 2 07:59:24.066868 env[1191]: time="2024-07-02T07:59:24.066718723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:24.070463 env[1191]: time="2024-07-02T07:59:24.070397453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:24.073464 env[1191]: time="2024-07-02T07:59:24.073360869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:24.075437 env[1191]: time="2024-07-02T07:59:24.075381379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:24.076051 env[1191]: time="2024-07-02T07:59:24.076004434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:59:24.092347 env[1191]: time="2024-07-02T07:59:24.092259412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:59:24.761397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477343470.mount: Deactivated successfully. Jul 2 07:59:25.755156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:59:25.755681 systemd[1]: Stopped kubelet.service. Jul 2 07:59:25.760241 systemd[1]: Starting kubelet.service... Jul 2 07:59:25.945895 systemd[1]: Started kubelet.service. Jul 2 07:59:26.050110 kubelet[1493]: E0702 07:59:26.049759 1493 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:59:26.052511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:59:26.052659 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
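Every kubelet start so far fails identically: /var/lib/kubelet/config.yaml does not exist, the process exits with status 1, and systemd schedules another restart (counter 1, then 2). That is the expected state of a node that has not yet been joined to a cluster, since this file is typically generated during bootstrap (kubeadm, for instance, writes it) rather than shipped in the OS image. A pre-flight check for the same condition, shown only as an illustrative sketch:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

    if not KUBELET_CONFIG.exists():
        # Mirrors the failure the kubelet logs: the config has not been generated
        # yet, so the service exits and systemd retries it.
        print(f"kubelet config missing: {KUBELET_CONFIG}")
    else:
        print(f"kubelet config present ({KUBELET_CONFIG.stat().st_size} bytes)")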
Jul 2 07:59:27.587811 env[1191]: time="2024-07-02T07:59:27.587724950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:27.591392 env[1191]: time="2024-07-02T07:59:27.591251523Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:27.596195 env[1191]: time="2024-07-02T07:59:27.596132669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:27.600694 env[1191]: time="2024-07-02T07:59:27.600616784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:27.601991 env[1191]: time="2024-07-02T07:59:27.601766576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:59:31.959470 systemd[1]: Stopped kubelet.service. Jul 2 07:59:31.962118 systemd[1]: Starting kubelet.service... Jul 2 07:59:32.006175 systemd[1]: Reloading. Jul 2 07:59:32.132856 /usr/lib/systemd/system-generators/torcx-generator[1586]: time="2024-07-02T07:59:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:59:32.132889 /usr/lib/systemd/system-generators/torcx-generator[1586]: time="2024-07-02T07:59:32Z" level=info msg="torcx already run" Jul 2 07:59:32.307618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:59:32.308122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:59:32.341819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:59:32.482283 systemd[1]: Started kubelet.service. Jul 2 07:59:32.485965 systemd[1]: Stopping kubelet.service... Jul 2 07:59:32.486868 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:59:32.487608 systemd[1]: Stopped kubelet.service. Jul 2 07:59:32.491168 systemd[1]: Starting kubelet.service... Jul 2 07:59:32.614494 systemd[1]: Started kubelet.service. Jul 2 07:59:32.692202 kubelet[1640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:59:32.692202 kubelet[1640]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:59:32.692202 kubelet[1640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:59:32.692811 kubelet[1640]: I0702 07:59:32.692263 1640 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:59:33.169401 kubelet[1640]: I0702 07:59:33.169344 1640 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:59:33.169401 kubelet[1640]: I0702 07:59:33.169383 1640 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:59:33.169621 kubelet[1640]: I0702 07:59:33.169609 1640 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:59:33.215228 kubelet[1640]: E0702 07:59:33.215191 1640 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.152.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.219030 kubelet[1640]: I0702 07:59:33.218974 1640 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:59:33.242857 kubelet[1640]: I0702 07:59:33.242806 1640 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:59:33.244064 kubelet[1640]: I0702 07:59:33.244013 1640 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:59:33.244335 kubelet[1640]: I0702 07:59:33.244298 1640 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:59:33.244522 kubelet[1640]: I0702 07:59:33.244366 1640 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:59:33.244522 kubelet[1640]: I0702 07:59:33.244384 1640 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 
07:59:33.244624 kubelet[1640]: I0702 07:59:33.244547 1640 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:59:33.244757 kubelet[1640]: I0702 07:59:33.244737 1640 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:59:33.244849 kubelet[1640]: I0702 07:59:33.244767 1640 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:59:33.244943 kubelet[1640]: I0702 07:59:33.244814 1640 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:59:33.245006 kubelet[1640]: I0702 07:59:33.244961 1640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:59:33.251340 kubelet[1640]: W0702 07:59:33.251246 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.152.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-2-fce33301fd&limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.251584 kubelet[1640]: E0702 07:59:33.251384 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.152.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-2-fce33301fd&limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.251584 kubelet[1640]: W0702 07:59:33.251481 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.152.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.251688 kubelet[1640]: E0702 07:59:33.251600 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.152.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.252718 kubelet[1640]: I0702 07:59:33.252316 1640 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:59:33.262290 kubelet[1640]: I0702 07:59:33.262221 1640 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:59:33.264192 kubelet[1640]: W0702 07:59:33.263979 1640 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
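The reflector warnings above all reduce to one condition: nothing is answering on 146.190.152.6:6443 yet, so every list/watch and the certificate signing request fail with "connection refused" until an API server starts listening there. A minimal connectivity probe for that endpoint, given only as an illustration (address and port taken from the log):

    import socket

    API_SERVER = ("146.190.152.6", 6443)  # endpoint from the reflector errors above

    try:
        with socket.create_connection(API_SERVER, timeout=2):
            print("API server port is accepting connections")
    except OSError as exc:
        # A ConnectionRefusedError here corresponds to the
        # "dial tcp ...:6443: connect: connection refused" lines in the log.
        print(f"cannot reach {API_SERVER[0]}:{API_SERVER[1]}: {exc}")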
Jul 2 07:59:33.265510 kubelet[1640]: I0702 07:59:33.265280 1640 server.go:1256] "Started kubelet" Jul 2 07:59:33.275806 kubelet[1640]: I0702 07:59:33.275758 1640 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:59:33.276760 kubelet[1640]: I0702 07:59:33.276717 1640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:59:33.277248 kubelet[1640]: I0702 07:59:33.277218 1640 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:59:33.277703 kubelet[1640]: I0702 07:59:33.277679 1640 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:59:33.279184 kubelet[1640]: E0702 07:59:33.279135 1640 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.152.6:6443/api/v1/namespaces/default/events\": dial tcp 146.190.152.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-2-fce33301fd.17de56762d1f0829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-2-fce33301fd,UID:ci-3510.3.5-2-fce33301fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-2-fce33301fd,},FirstTimestamp:2024-07-02 07:59:33.265246249 +0000 UTC m=+0.642677111,LastTimestamp:2024-07-02 07:59:33.265246249 +0000 UTC m=+0.642677111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-2-fce33301fd,}" Jul 2 07:59:33.282929 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:59:33.284346 kubelet[1640]: E0702 07:59:33.284018 1640 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:59:33.284755 kubelet[1640]: I0702 07:59:33.284715 1640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:59:33.285463 kubelet[1640]: I0702 07:59:33.285443 1640 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:59:33.287220 kubelet[1640]: I0702 07:59:33.287155 1640 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:59:33.288037 kubelet[1640]: I0702 07:59:33.287985 1640 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:59:33.288590 kubelet[1640]: W0702 07:59:33.288529 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.152.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.288696 kubelet[1640]: E0702 07:59:33.288605 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.152.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.290537 kubelet[1640]: E0702 07:59:33.290499 1640 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-2-fce33301fd\" not found" Jul 2 07:59:33.290876 kubelet[1640]: E0702 07:59:33.290853 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.152.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-2-fce33301fd?timeout=10s\": dial tcp 146.190.152.6:6443: connect: connection refused" interval="200ms" Jul 2 07:59:33.291647 kubelet[1640]: I0702 07:59:33.291619 1640 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:59:33.291748 kubelet[1640]: I0702 07:59:33.291708 1640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:59:33.293050 kubelet[1640]: I0702 07:59:33.293030 1640 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:59:33.315763 kubelet[1640]: I0702 07:59:33.315724 1640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:59:33.317489 kubelet[1640]: I0702 07:59:33.317453 1640 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:59:33.317730 kubelet[1640]: I0702 07:59:33.317710 1640 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:59:33.317860 kubelet[1640]: I0702 07:59:33.317832 1640 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:59:33.318092 kubelet[1640]: E0702 07:59:33.318076 1640 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:59:33.326572 kubelet[1640]: W0702 07:59:33.326518 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.152.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.328014 kubelet[1640]: E0702 07:59:33.327981 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.152.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:33.330458 kubelet[1640]: I0702 07:59:33.330430 1640 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:59:33.333534 kubelet[1640]: I0702 07:59:33.333502 1640 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:59:33.333915 kubelet[1640]: I0702 07:59:33.333886 1640 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:59:33.337410 kubelet[1640]: I0702 07:59:33.337365 1640 policy_none.go:49] "None policy: Start" Jul 2 07:59:33.338288 kubelet[1640]: I0702 07:59:33.338261 1640 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:59:33.338500 kubelet[1640]: I0702 07:59:33.338482 1640 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:59:33.352711 systemd[1]: Created slice kubepods.slice. Jul 2 07:59:33.360436 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:59:33.365065 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 07:59:33.377712 kubelet[1640]: I0702 07:59:33.377669 1640 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:59:33.378014 kubelet[1640]: I0702 07:59:33.377980 1640 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:59:33.381857 kubelet[1640]: E0702 07:59:33.381829 1640 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-2-fce33301fd\" not found" Jul 2 07:59:33.391787 kubelet[1640]: I0702 07:59:33.391753 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.392501 kubelet[1640]: E0702 07:59:33.392468 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.152.6:6443/api/v1/nodes\": dial tcp 146.190.152.6:6443: connect: connection refused" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.418776 kubelet[1640]: I0702 07:59:33.418723 1640 topology_manager.go:215] "Topology Admit Handler" podUID="6d8128898ae531237c73392ad0dfcebf" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.422754 kubelet[1640]: I0702 07:59:33.420327 1640 topology_manager.go:215] "Topology Admit Handler" podUID="28a22e8e72aea211fab031a856b3372e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.423335 kubelet[1640]: I0702 07:59:33.423298 1640 topology_manager.go:215] "Topology Admit Handler" podUID="7f3304fc673ebe961bb6e9ee097c608f" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.431360 systemd[1]: Created slice kubepods-burstable-pod6d8128898ae531237c73392ad0dfcebf.slice. Jul 2 07:59:33.445163 systemd[1]: Created slice kubepods-burstable-pod7f3304fc673ebe961bb6e9ee097c608f.slice. Jul 2 07:59:33.451481 systemd[1]: Created slice kubepods-burstable-pod28a22e8e72aea211fab031a856b3372e.slice. 
Jul 2 07:59:33.492205 kubelet[1640]: E0702 07:59:33.492161 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.152.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-2-fce33301fd?timeout=10s\": dial tcp 146.190.152.6:6443: connect: connection refused" interval="400ms" Jul 2 07:59:33.589670 kubelet[1640]: I0702 07:59:33.589603 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589670 kubelet[1640]: I0702 07:59:33.589666 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589670 kubelet[1640]: I0702 07:59:33.589690 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589960 kubelet[1640]: I0702 07:59:33.589729 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589960 kubelet[1640]: I0702 07:59:33.589767 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f3304fc673ebe961bb6e9ee097c608f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-2-fce33301fd\" (UID: \"7f3304fc673ebe961bb6e9ee097c608f\") " pod="kube-system/kube-scheduler-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589960 kubelet[1640]: I0702 07:59:33.589804 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589960 kubelet[1640]: I0702 07:59:33.589827 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.589960 kubelet[1640]: I0702 07:59:33.589846 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.590117 kubelet[1640]: I0702 07:59:33.589869 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.593987 kubelet[1640]: I0702 07:59:33.593952 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.594637 kubelet[1640]: E0702 07:59:33.594617 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.152.6:6443/api/v1/nodes\": dial tcp 146.190.152.6:6443: connect: connection refused" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.744371 kubelet[1640]: E0702 07:59:33.743497 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:33.744825 env[1191]: time="2024-07-02T07:59:33.744711203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-2-fce33301fd,Uid:6d8128898ae531237c73392ad0dfcebf,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:33.753642 kubelet[1640]: E0702 07:59:33.753611 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:33.754200 kubelet[1640]: E0702 07:59:33.754178 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:33.755094 env[1191]: time="2024-07-02T07:59:33.754799435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-2-fce33301fd,Uid:28a22e8e72aea211fab031a856b3372e,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:33.755094 env[1191]: time="2024-07-02T07:59:33.754824459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-2-fce33301fd,Uid:7f3304fc673ebe961bb6e9ee097c608f,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:33.813490 kubelet[1640]: E0702 07:59:33.813443 1640 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.152.6:6443/api/v1/namespaces/default/events\": dial tcp 146.190.152.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-2-fce33301fd.17de56762d1f0829 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-2-fce33301fd,UID:ci-3510.3.5-2-fce33301fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-2-fce33301fd,},FirstTimestamp:2024-07-02 07:59:33.265246249 +0000 UTC m=+0.642677111,LastTimestamp:2024-07-02 07:59:33.265246249 +0000 UTC m=+0.642677111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-2-fce33301fd,}" Jul 2 
07:59:33.893015 kubelet[1640]: E0702 07:59:33.892969 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.152.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-2-fce33301fd?timeout=10s\": dial tcp 146.190.152.6:6443: connect: connection refused" interval="800ms" Jul 2 07:59:33.996908 kubelet[1640]: I0702 07:59:33.996295 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:33.996908 kubelet[1640]: E0702 07:59:33.996797 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.152.6:6443/api/v1/nodes\": dial tcp 146.190.152.6:6443: connect: connection refused" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:34.291259 kubelet[1640]: W0702 07:59:34.290910 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://146.190.152.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-2-fce33301fd&limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.291259 kubelet[1640]: E0702 07:59:34.290973 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.152.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-2-fce33301fd&limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.368209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576219655.mount: Deactivated successfully. Jul 2 07:59:34.375448 env[1191]: time="2024-07-02T07:59:34.375397932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.382368 env[1191]: time="2024-07-02T07:59:34.382299334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.384476 env[1191]: time="2024-07-02T07:59:34.384427671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.385420 env[1191]: time="2024-07-02T07:59:34.385379987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.387464 env[1191]: time="2024-07-02T07:59:34.387419952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.389350 env[1191]: time="2024-07-02T07:59:34.389293437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.390373 env[1191]: time="2024-07-02T07:59:34.390337505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.393671 env[1191]: time="2024-07-02T07:59:34.393624171Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.400288 env[1191]: time="2024-07-02T07:59:34.400238714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.401525 env[1191]: time="2024-07-02T07:59:34.401488469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.402857 env[1191]: time="2024-07-02T07:59:34.402821416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.405469 env[1191]: time="2024-07-02T07:59:34.405431569Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:59:34.427906 kubelet[1640]: W0702 07:59:34.427828 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://146.190.152.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.427906 kubelet[1640]: E0702 07:59:34.427891 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.152.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.451161 env[1191]: time="2024-07-02T07:59:34.451014313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:34.451499 env[1191]: time="2024-07-02T07:59:34.451442128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:34.451674 env[1191]: time="2024-07-02T07:59:34.451490015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:34.452068 env[1191]: time="2024-07-02T07:59:34.452013841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adb163160535978f6c96b9fc7a0120a5f5a4b4b7119ada7a015f16801739e468 pid=1678 runtime=io.containerd.runc.v2 Jul 2 07:59:34.459835 env[1191]: time="2024-07-02T07:59:34.459673358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:34.459835 env[1191]: time="2024-07-02T07:59:34.459754481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:34.459835 env[1191]: time="2024-07-02T07:59:34.459771352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:34.462715 env[1191]: time="2024-07-02T07:59:34.462448462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6490b35933373edc2921bc2a4caede5d1f7aed3736ec51e7f3a2bc615f483583 pid=1696 runtime=io.containerd.runc.v2 Jul 2 07:59:34.474780 env[1191]: time="2024-07-02T07:59:34.468548433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:34.474780 env[1191]: time="2024-07-02T07:59:34.468869815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:34.474780 env[1191]: time="2024-07-02T07:59:34.468923695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:34.476919 env[1191]: time="2024-07-02T07:59:34.475516075Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7337addfe299786a04c9eaaebe468a6724095ca8b9c7b76e7ec99f13dbb3c154 pid=1706 runtime=io.containerd.runc.v2 Jul 2 07:59:34.484098 systemd[1]: Started cri-containerd-adb163160535978f6c96b9fc7a0120a5f5a4b4b7119ada7a015f16801739e468.scope. Jul 2 07:59:34.532749 systemd[1]: Started cri-containerd-6490b35933373edc2921bc2a4caede5d1f7aed3736ec51e7f3a2bc615f483583.scope. Jul 2 07:59:34.546789 systemd[1]: Started cri-containerd-7337addfe299786a04c9eaaebe468a6724095ca8b9c7b76e7ec99f13dbb3c154.scope. Jul 2 07:59:34.606675 env[1191]: time="2024-07-02T07:59:34.606630240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-2-fce33301fd,Uid:6d8128898ae531237c73392ad0dfcebf,Namespace:kube-system,Attempt:0,} returns sandbox id \"adb163160535978f6c96b9fc7a0120a5f5a4b4b7119ada7a015f16801739e468\"" Jul 2 07:59:34.610162 kubelet[1640]: E0702 07:59:34.609866 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:34.615580 env[1191]: time="2024-07-02T07:59:34.615519736Z" level=info msg="CreateContainer within sandbox \"adb163160535978f6c96b9fc7a0120a5f5a4b4b7119ada7a015f16801739e468\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:59:34.635931 env[1191]: time="2024-07-02T07:59:34.635855182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-2-fce33301fd,Uid:28a22e8e72aea211fab031a856b3372e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7337addfe299786a04c9eaaebe468a6724095ca8b9c7b76e7ec99f13dbb3c154\"" Jul 2 07:59:34.639534 kubelet[1640]: E0702 07:59:34.639298 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:34.642839 env[1191]: time="2024-07-02T07:59:34.642691838Z" level=info msg="CreateContainer within sandbox \"7337addfe299786a04c9eaaebe468a6724095ca8b9c7b76e7ec99f13dbb3c154\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:59:34.656664 env[1191]: time="2024-07-02T07:59:34.656607567Z" level=info msg="CreateContainer within sandbox \"adb163160535978f6c96b9fc7a0120a5f5a4b4b7119ada7a015f16801739e468\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"bd3a8ebf105c9a39645e6cf3aa21358ca9dc5f5b1685620693ac822a3e5f6a8a\"" Jul 2 07:59:34.657458 env[1191]: time="2024-07-02T07:59:34.657419062Z" level=info msg="StartContainer for \"bd3a8ebf105c9a39645e6cf3aa21358ca9dc5f5b1685620693ac822a3e5f6a8a\"" Jul 2 07:59:34.665182 env[1191]: time="2024-07-02T07:59:34.665118089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-2-fce33301fd,Uid:7f3304fc673ebe961bb6e9ee097c608f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6490b35933373edc2921bc2a4caede5d1f7aed3736ec51e7f3a2bc615f483583\"" Jul 2 07:59:34.666942 kubelet[1640]: E0702 07:59:34.666905 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:34.669498 env[1191]: time="2024-07-02T07:59:34.669454048Z" level=info msg="CreateContainer within sandbox \"6490b35933373edc2921bc2a4caede5d1f7aed3736ec51e7f3a2bc615f483583\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:59:34.669801 env[1191]: time="2024-07-02T07:59:34.669773392Z" level=info msg="CreateContainer within sandbox \"7337addfe299786a04c9eaaebe468a6724095ca8b9c7b76e7ec99f13dbb3c154\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6867f0eb0387ddee3aeaddbc365f4eaabf2f1371db60de1fb8e0971e30b22aa\"" Jul 2 07:59:34.670426 env[1191]: time="2024-07-02T07:59:34.670401055Z" level=info msg="StartContainer for \"b6867f0eb0387ddee3aeaddbc365f4eaabf2f1371db60de1fb8e0971e30b22aa\"" Jul 2 07:59:34.694280 kubelet[1640]: E0702 07:59:34.694242 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.152.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-2-fce33301fd?timeout=10s\": dial tcp 146.190.152.6:6443: connect: connection refused" interval="1.6s" Jul 2 07:59:34.697185 env[1191]: time="2024-07-02T07:59:34.697113651Z" level=info msg="CreateContainer within sandbox \"6490b35933373edc2921bc2a4caede5d1f7aed3736ec51e7f3a2bc615f483583\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4821f33a9c052bc153f6fabb5c720f77fa87c10021be743fd5e7818e14a7f048\"" Jul 2 07:59:34.698003 env[1191]: time="2024-07-02T07:59:34.697958527Z" level=info msg="StartContainer for \"4821f33a9c052bc153f6fabb5c720f77fa87c10021be743fd5e7818e14a7f048\"" Jul 2 07:59:34.702103 systemd[1]: Started cri-containerd-bd3a8ebf105c9a39645e6cf3aa21358ca9dc5f5b1685620693ac822a3e5f6a8a.scope. Jul 2 07:59:34.724808 systemd[1]: Started cri-containerd-b6867f0eb0387ddee3aeaddbc365f4eaabf2f1371db60de1fb8e0971e30b22aa.scope. Jul 2 07:59:34.766093 systemd[1]: Started cri-containerd-4821f33a9c052bc153f6fabb5c720f77fa87c10021be743fd5e7818e14a7f048.scope. 
Jul 2 07:59:34.800151 kubelet[1640]: I0702 07:59:34.800015 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:34.800713 kubelet[1640]: E0702 07:59:34.800512 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.152.6:6443/api/v1/nodes\": dial tcp 146.190.152.6:6443: connect: connection refused" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:34.801861 kubelet[1640]: W0702 07:59:34.801809 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://146.190.152.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.801861 kubelet[1640]: E0702 07:59:34.801866 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.152.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.812526 env[1191]: time="2024-07-02T07:59:34.812451452Z" level=info msg="StartContainer for \"bd3a8ebf105c9a39645e6cf3aa21358ca9dc5f5b1685620693ac822a3e5f6a8a\" returns successfully" Jul 2 07:59:34.834475 env[1191]: time="2024-07-02T07:59:34.834405440Z" level=info msg="StartContainer for \"b6867f0eb0387ddee3aeaddbc365f4eaabf2f1371db60de1fb8e0971e30b22aa\" returns successfully" Jul 2 07:59:34.870050 env[1191]: time="2024-07-02T07:59:34.869973779Z" level=info msg="StartContainer for \"4821f33a9c052bc153f6fabb5c720f77fa87c10021be743fd5e7818e14a7f048\" returns successfully" Jul 2 07:59:34.870870 kubelet[1640]: W0702 07:59:34.870834 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://146.190.152.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:34.870870 kubelet[1640]: E0702 07:59:34.870876 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.152.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:35.324389 kubelet[1640]: E0702 07:59:35.324348 1640 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.152.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.152.6:6443: connect: connection refused Jul 2 07:59:35.335724 kubelet[1640]: E0702 07:59:35.335682 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:35.338036 kubelet[1640]: E0702 07:59:35.338007 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:35.341215 kubelet[1640]: E0702 07:59:35.341183 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:36.343933 kubelet[1640]: E0702 07:59:36.343884 
1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:36.345631 kubelet[1640]: E0702 07:59:36.345598 1640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:36.403071 kubelet[1640]: I0702 07:59:36.403022 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:37.856749 kubelet[1640]: E0702 07:59:37.856683 1640 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-2-fce33301fd\" not found" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:37.961384 kubelet[1640]: I0702 07:59:37.961301 1640 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:38.249506 kubelet[1640]: I0702 07:59:38.248893 1640 apiserver.go:52] "Watching apiserver" Jul 2 07:59:38.288378 kubelet[1640]: I0702 07:59:38.288260 1640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:59:41.341376 systemd[1]: Reloading. Jul 2 07:59:41.488500 /usr/lib/systemd/system-generators/torcx-generator[1922]: time="2024-07-02T07:59:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:59:41.488545 /usr/lib/systemd/system-generators/torcx-generator[1922]: time="2024-07-02T07:59:41Z" level=info msg="torcx already run" Jul 2 07:59:41.645193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:59:41.645234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:59:41.679785 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:59:41.918447 kubelet[1640]: I0702 07:59:41.916931 1640 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:59:41.921769 systemd[1]: Stopping kubelet.service... Jul 2 07:59:41.939200 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:59:41.939482 systemd[1]: Stopped kubelet.service. Jul 2 07:59:41.939576 systemd[1]: kubelet.service: Consumed 1.119s CPU time. Jul 2 07:59:41.959941 systemd[1]: Starting kubelet.service... Jul 2 07:59:43.351352 systemd[1]: Started kubelet.service. Jul 2 07:59:43.490109 kubelet[1974]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:59:43.490109 kubelet[1974]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:59:43.490109 kubelet[1974]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:59:43.490735 kubelet[1974]: I0702 07:59:43.490181 1974 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:59:43.504014 kubelet[1974]: I0702 07:59:43.503171 1974 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:59:43.504014 kubelet[1974]: I0702 07:59:43.503228 1974 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:59:43.504014 kubelet[1974]: I0702 07:59:43.503911 1974 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:59:43.507918 kubelet[1974]: I0702 07:59:43.507315 1974 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:59:43.515641 kubelet[1974]: I0702 07:59:43.515590 1974 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:59:43.517030 sudo[1986]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:59:43.518614 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:59:43.544376 kubelet[1974]: I0702 07:59:43.544308 1974 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:59:43.546763 kubelet[1974]: I0702 07:59:43.546718 1974 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:59:43.547456 kubelet[1974]: I0702 07:59:43.547405 1974 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:59:43.547785 kubelet[1974]: I0702 07:59:43.547764 1974 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:59:43.547948 kubelet[1974]: I0702 07:59:43.547933 1974 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 
07:59:43.548162 kubelet[1974]: I0702 07:59:43.548146 1974 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:59:43.554245 kubelet[1974]: I0702 07:59:43.554193 1974 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:59:43.555460 kubelet[1974]: I0702 07:59:43.555428 1974 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:59:43.555758 kubelet[1974]: I0702 07:59:43.555725 1974 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:59:43.555887 kubelet[1974]: I0702 07:59:43.555875 1974 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:59:43.575430 kubelet[1974]: I0702 07:59:43.566974 1974 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:59:43.575430 kubelet[1974]: I0702 07:59:43.567313 1974 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:59:43.575430 kubelet[1974]: I0702 07:59:43.568224 1974 server.go:1256] "Started kubelet" Jul 2 07:59:43.575430 kubelet[1974]: I0702 07:59:43.571871 1974 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:59:43.576065 kubelet[1974]: I0702 07:59:43.576015 1974 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:59:43.579360 kubelet[1974]: I0702 07:59:43.577651 1974 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:59:43.581363 kubelet[1974]: I0702 07:59:43.580308 1974 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:59:43.581363 kubelet[1974]: I0702 07:59:43.580845 1974 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:59:43.586039 kubelet[1974]: I0702 07:59:43.584948 1974 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:59:43.587506 kubelet[1974]: I0702 07:59:43.587471 1974 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:59:43.587785 kubelet[1974]: I0702 07:59:43.587763 1974 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:59:43.644549 kubelet[1974]: E0702 07:59:43.643391 1974 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:59:43.644549 kubelet[1974]: I0702 07:59:43.643810 1974 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:59:43.644549 kubelet[1974]: I0702 07:59:43.643830 1974 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:59:43.644549 kubelet[1974]: I0702 07:59:43.643951 1974 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:59:43.693018 kubelet[1974]: I0702 07:59:43.692970 1974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:59:43.693294 kubelet[1974]: I0702 07:59:43.693203 1974 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.702974 kubelet[1974]: I0702 07:59:43.700991 1974 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:59:43.702974 kubelet[1974]: I0702 07:59:43.701050 1974 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:59:43.702974 kubelet[1974]: I0702 07:59:43.701083 1974 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:59:43.702974 kubelet[1974]: E0702 07:59:43.701159 1974 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:59:43.724239 kubelet[1974]: I0702 07:59:43.721550 1974 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.724239 kubelet[1974]: I0702 07:59:43.721644 1974 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.777043 kubelet[1974]: I0702 07:59:43.776996 1974 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:59:43.777043 kubelet[1974]: I0702 07:59:43.777028 1974 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:59:43.777043 kubelet[1974]: I0702 07:59:43.777054 1974 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:59:43.777308 kubelet[1974]: I0702 07:59:43.777230 1974 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:59:43.777308 kubelet[1974]: I0702 07:59:43.777257 1974 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:59:43.777308 kubelet[1974]: I0702 07:59:43.777270 1974 policy_none.go:49] "None policy: Start" Jul 2 07:59:43.778607 kubelet[1974]: I0702 07:59:43.778552 1974 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:59:43.778607 kubelet[1974]: I0702 07:59:43.778601 1974 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:59:43.778870 kubelet[1974]: I0702 07:59:43.778793 1974 state_mem.go:75] "Updated machine memory state" Jul 2 07:59:43.786701 kubelet[1974]: I0702 07:59:43.786668 1974 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:59:43.789989 kubelet[1974]: I0702 07:59:43.789951 1974 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:59:43.801285 kubelet[1974]: I0702 07:59:43.801234 1974 topology_manager.go:215] "Topology Admit Handler" podUID="6d8128898ae531237c73392ad0dfcebf" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.801506 kubelet[1974]: I0702 07:59:43.801389 1974 topology_manager.go:215] "Topology Admit Handler" podUID="28a22e8e72aea211fab031a856b3372e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.801506 kubelet[1974]: I0702 07:59:43.801468 1974 topology_manager.go:215] "Topology Admit Handler" podUID="7f3304fc673ebe961bb6e9ee097c608f" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.822843 kubelet[1974]: W0702 07:59:43.822803 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:59:43.823093 kubelet[1974]: W0702 07:59:43.822909 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:59:43.823149 kubelet[1974]: W0702 07:59:43.823139 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots] Jul 2 07:59:43.896570 kubelet[1974]: I0702 07:59:43.894880 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.897159 kubelet[1974]: I0702 07:59:43.897128 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.897591 kubelet[1974]: I0702 07:59:43.897568 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.897817 kubelet[1974]: I0702 07:59:43.897801 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.898259 kubelet[1974]: I0702 07:59:43.898241 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.898453 kubelet[1974]: I0702 07:59:43.898437 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.898620 kubelet[1974]: I0702 07:59:43.898604 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28a22e8e72aea211fab031a856b3372e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-2-fce33301fd\" (UID: \"28a22e8e72aea211fab031a856b3372e\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.898772 kubelet[1974]: I0702 07:59:43.898740 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f3304fc673ebe961bb6e9ee097c608f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-2-fce33301fd\" (UID: \"7f3304fc673ebe961bb6e9ee097c608f\") " pod="kube-system/kube-scheduler-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:43.898862 kubelet[1974]: I0702 07:59:43.898846 1974 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d8128898ae531237c73392ad0dfcebf-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" (UID: \"6d8128898ae531237c73392ad0dfcebf\") " pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:44.124511 kubelet[1974]: E0702 07:59:44.124460 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.125189 kubelet[1974]: E0702 07:59:44.125087 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.127299 kubelet[1974]: E0702 07:59:44.125479 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.478583 sudo[1986]: pam_unix(sudo:session): session closed for user root Jul 2 07:59:44.562965 kubelet[1974]: I0702 07:59:44.562908 1974 apiserver.go:52] "Watching apiserver" Jul 2 07:59:44.588229 kubelet[1974]: I0702 07:59:44.588173 1974 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:59:44.740908 kubelet[1974]: E0702 07:59:44.740730 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.742254 kubelet[1974]: I0702 07:59:44.741670 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-2-fce33301fd" podStartSLOduration=1.741605871 podStartE2EDuration="1.741605871s" podCreationTimestamp="2024-07-02 07:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:59:44.72143852 +0000 UTC m=+1.342970538" watchObservedRunningTime="2024-07-02 07:59:44.741605871 +0000 UTC m=+1.363137871" Jul 2 07:59:44.742678 kubelet[1974]: E0702 07:59:44.742642 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.757439 kubelet[1974]: W0702 07:59:44.757395 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:59:44.757875 kubelet[1974]: E0702 07:59:44.757841 1974 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-2-fce33301fd\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" Jul 2 07:59:44.758811 kubelet[1974]: E0702 07:59:44.758777 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:44.771198 kubelet[1974]: I0702 07:59:44.771147 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-2-fce33301fd" podStartSLOduration=1.771092437 podStartE2EDuration="1.771092437s" podCreationTimestamp="2024-07-02 07:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:59:44.743485323 +0000 UTC m=+1.365017338" watchObservedRunningTime="2024-07-02 07:59:44.771092437 +0000 UTC m=+1.392624426" Jul 2 07:59:44.771738 kubelet[1974]: I0702 07:59:44.771714 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-2-fce33301fd" podStartSLOduration=1.771667036 podStartE2EDuration="1.771667036s" podCreationTimestamp="2024-07-02 07:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:59:44.770817194 +0000 UTC m=+1.392349192" watchObservedRunningTime="2024-07-02 07:59:44.771667036 +0000 UTC m=+1.393199035" Jul 2 07:59:45.744065 kubelet[1974]: E0702 07:59:45.743993 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:46.411017 sudo[1291]: pam_unix(sudo:session): session closed for user root Jul 2 07:59:46.417912 sshd[1288]: pam_unix(sshd:session): session closed for user core Jul 2 07:59:46.422767 systemd-logind[1177]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:59:46.424956 systemd[1]: sshd@4-146.190.152.6:22-147.75.109.163:39640.service: Deactivated successfully. Jul 2 07:59:46.425868 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:59:46.426047 systemd[1]: session-5.scope: Consumed 6.864s CPU time. Jul 2 07:59:46.427535 systemd-logind[1177]: Removed session 5. Jul 2 07:59:46.554415 kubelet[1974]: E0702 07:59:46.554365 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:46.746369 kubelet[1974]: E0702 07:59:46.746184 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:47.836002 update_engine[1180]: I0702 07:59:47.835489 1180 update_attempter.cc:509] Updating boot flags... Jul 2 07:59:53.322504 kubelet[1974]: E0702 07:59:53.322445 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:53.474234 kubelet[1974]: I0702 07:59:53.474183 1974 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:59:53.474802 env[1191]: time="2024-07-02T07:59:53.474759062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 07:59:53.475658 kubelet[1974]: I0702 07:59:53.475633 1974 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:59:53.758610 kubelet[1974]: E0702 07:59:53.758575 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.324295 kubelet[1974]: I0702 07:59:54.324247 1974 topology_manager.go:215] "Topology Admit Handler" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" podNamespace="kube-system" podName="cilium-vw4w8" Jul 2 07:59:54.332814 systemd[1]: Created slice kubepods-burstable-pod1e7b2ef7_74b0_4040_9baa_9d4faa04f29d.slice. Jul 2 07:59:54.340370 kubelet[1974]: I0702 07:59:54.340334 1974 topology_manager.go:215] "Topology Admit Handler" podUID="75ab21b2-1af6-43ce-8b63-08e352e3456d" podNamespace="kube-system" podName="kube-proxy-2c2ct" Jul 2 07:59:54.346171 systemd[1]: Created slice kubepods-besteffort-pod75ab21b2_1af6_43ce_8b63_08e352e3456d.slice. Jul 2 07:59:54.376897 kubelet[1974]: I0702 07:59:54.376851 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-run\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.377270 kubelet[1974]: I0702 07:59:54.377249 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-clustermesh-secrets\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.377501 kubelet[1974]: I0702 07:59:54.377487 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-config-path\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.377620 kubelet[1974]: I0702 07:59:54.377610 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75ab21b2-1af6-43ce-8b63-08e352e3456d-xtables-lock\") pod \"kube-proxy-2c2ct\" (UID: \"75ab21b2-1af6-43ce-8b63-08e352e3456d\") " pod="kube-system/kube-proxy-2c2ct" Jul 2 07:59:54.377831 kubelet[1974]: I0702 07:59:54.377819 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75ab21b2-1af6-43ce-8b63-08e352e3456d-lib-modules\") pod \"kube-proxy-2c2ct\" (UID: \"75ab21b2-1af6-43ce-8b63-08e352e3456d\") " pod="kube-system/kube-proxy-2c2ct" Jul 2 07:59:54.377936 kubelet[1974]: I0702 07:59:54.377927 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-bpf-maps\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378020 kubelet[1974]: I0702 07:59:54.378012 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cni-path\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378137 kubelet[1974]: I0702 07:59:54.378127 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-etc-cni-netd\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378249 kubelet[1974]: I0702 07:59:54.378238 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-net\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378408 kubelet[1974]: I0702 07:59:54.378395 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwhs\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-kube-api-access-qgwhs\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378534 kubelet[1974]: I0702 07:59:54.378523 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hostproc\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378630 kubelet[1974]: I0702 07:59:54.378620 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-lib-modules\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378719 kubelet[1974]: I0702 07:59:54.378710 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-kernel\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378815 kubelet[1974]: I0702 07:59:54.378805 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hubble-tls\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.378968 kubelet[1974]: I0702 07:59:54.378934 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz49z\" (UniqueName: \"kubernetes.io/projected/75ab21b2-1af6-43ce-8b63-08e352e3456d-kube-api-access-wz49z\") pod \"kube-proxy-2c2ct\" (UID: \"75ab21b2-1af6-43ce-8b63-08e352e3456d\") " pod="kube-system/kube-proxy-2c2ct" Jul 2 07:59:54.379154 kubelet[1974]: I0702 07:59:54.379141 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-cgroup\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " 
pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.379411 kubelet[1974]: I0702 07:59:54.379378 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75ab21b2-1af6-43ce-8b63-08e352e3456d-kube-proxy\") pod \"kube-proxy-2c2ct\" (UID: \"75ab21b2-1af6-43ce-8b63-08e352e3456d\") " pod="kube-system/kube-proxy-2c2ct" Jul 2 07:59:54.379477 kubelet[1974]: I0702 07:59:54.379426 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-xtables-lock\") pod \"cilium-vw4w8\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " pod="kube-system/cilium-vw4w8" Jul 2 07:59:54.571111 kubelet[1974]: I0702 07:59:54.571061 1974 topology_manager.go:215] "Topology Admit Handler" podUID="75716295-d2d4-4548-b10e-76833afcf6c9" podNamespace="kube-system" podName="cilium-operator-5cc964979-m28q6" Jul 2 07:59:54.577377 systemd[1]: Created slice kubepods-besteffort-pod75716295_d2d4_4548_b10e_76833afcf6c9.slice. Jul 2 07:59:54.636533 kubelet[1974]: E0702 07:59:54.636479 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.638038 env[1191]: time="2024-07-02T07:59:54.637988944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw4w8,Uid:1e7b2ef7-74b0-4040-9baa-9d4faa04f29d,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:54.653636 kubelet[1974]: E0702 07:59:54.653574 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.654829 env[1191]: time="2024-07-02T07:59:54.654523472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c2ct,Uid:75ab21b2-1af6-43ce-8b63-08e352e3456d,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:54.675274 env[1191]: time="2024-07-02T07:59:54.675137568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:54.675274 env[1191]: time="2024-07-02T07:59:54.675257575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:54.675564 env[1191]: time="2024-07-02T07:59:54.675294882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:54.675921 env[1191]: time="2024-07-02T07:59:54.675844782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623 pid=2070 runtime=io.containerd.runc.v2 Jul 2 07:59:54.685073 env[1191]: time="2024-07-02T07:59:54.684919166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:54.685073 env[1191]: time="2024-07-02T07:59:54.684997042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:54.685073 env[1191]: time="2024-07-02T07:59:54.685008211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:54.687560 env[1191]: time="2024-07-02T07:59:54.687302532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6b5a66eddc2ab362269d4cc241781edc3618b6db1e38e95ff06ff8a2f6d5da5 pid=2090 runtime=io.containerd.runc.v2 Jul 2 07:59:54.696518 kubelet[1974]: I0702 07:59:54.693507 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqsb\" (UniqueName: \"kubernetes.io/projected/75716295-d2d4-4548-b10e-76833afcf6c9-kube-api-access-ljqsb\") pod \"cilium-operator-5cc964979-m28q6\" (UID: \"75716295-d2d4-4548-b10e-76833afcf6c9\") " pod="kube-system/cilium-operator-5cc964979-m28q6" Jul 2 07:59:54.696518 kubelet[1974]: I0702 07:59:54.693624 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75716295-d2d4-4548-b10e-76833afcf6c9-cilium-config-path\") pod \"cilium-operator-5cc964979-m28q6\" (UID: \"75716295-d2d4-4548-b10e-76833afcf6c9\") " pod="kube-system/cilium-operator-5cc964979-m28q6" Jul 2 07:59:54.708388 kubelet[1974]: E0702 07:59:54.703940 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.710554 systemd[1]: Started cri-containerd-a6b5a66eddc2ab362269d4cc241781edc3618b6db1e38e95ff06ff8a2f6d5da5.scope. Jul 2 07:59:54.729447 systemd[1]: Started cri-containerd-c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623.scope. Jul 2 07:59:54.776072 env[1191]: time="2024-07-02T07:59:54.775143927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw4w8,Uid:1e7b2ef7-74b0-4040-9baa-9d4faa04f29d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\"" Jul 2 07:59:54.776601 kubelet[1974]: E0702 07:59:54.776531 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.780506 env[1191]: time="2024-07-02T07:59:54.779202987Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:59:54.786827 env[1191]: time="2024-07-02T07:59:54.786766881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c2ct,Uid:75ab21b2-1af6-43ce-8b63-08e352e3456d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6b5a66eddc2ab362269d4cc241781edc3618b6db1e38e95ff06ff8a2f6d5da5\"" Jul 2 07:59:54.788000 kubelet[1974]: E0702 07:59:54.787966 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.795290 env[1191]: time="2024-07-02T07:59:54.795227820Z" level=info msg="CreateContainer within sandbox \"a6b5a66eddc2ab362269d4cc241781edc3618b6db1e38e95ff06ff8a2f6d5da5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:59:54.829079 env[1191]: time="2024-07-02T07:59:54.828912137Z" level=info msg="CreateContainer within sandbox \"a6b5a66eddc2ab362269d4cc241781edc3618b6db1e38e95ff06ff8a2f6d5da5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"5170a30b495c64a7b5e8c837a5dde32d9edcb53722e6be3252abef58b230f75c\"" Jul 2 07:59:54.830684 env[1191]: time="2024-07-02T07:59:54.830645118Z" level=info msg="StartContainer for \"5170a30b495c64a7b5e8c837a5dde32d9edcb53722e6be3252abef58b230f75c\"" Jul 2 07:59:54.860992 systemd[1]: Started cri-containerd-5170a30b495c64a7b5e8c837a5dde32d9edcb53722e6be3252abef58b230f75c.scope. Jul 2 07:59:54.883490 kubelet[1974]: E0702 07:59:54.882837 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:54.885638 env[1191]: time="2024-07-02T07:59:54.885595624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-m28q6,Uid:75716295-d2d4-4548-b10e-76833afcf6c9,Namespace:kube-system,Attempt:0,}" Jul 2 07:59:54.905355 env[1191]: time="2024-07-02T07:59:54.905277827Z" level=info msg="StartContainer for \"5170a30b495c64a7b5e8c837a5dde32d9edcb53722e6be3252abef58b230f75c\" returns successfully" Jul 2 07:59:54.920446 env[1191]: time="2024-07-02T07:59:54.918742501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:59:54.920446 env[1191]: time="2024-07-02T07:59:54.918810036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:59:54.920446 env[1191]: time="2024-07-02T07:59:54.918827853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:59:54.920446 env[1191]: time="2024-07-02T07:59:54.919062327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b pid=2185 runtime=io.containerd.runc.v2 Jul 2 07:59:54.955031 systemd[1]: Started cri-containerd-11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b.scope. 
Jul 2 07:59:55.021121 env[1191]: time="2024-07-02T07:59:55.021070257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-m28q6,Uid:75716295-d2d4-4548-b10e-76833afcf6c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\"" Jul 2 07:59:55.022916 kubelet[1974]: E0702 07:59:55.022256 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:55.767269 kubelet[1974]: E0702 07:59:55.767226 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 07:59:55.783506 kubelet[1974]: I0702 07:59:55.783457 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2c2ct" podStartSLOduration=1.783403023 podStartE2EDuration="1.783403023s" podCreationTimestamp="2024-07-02 07:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:59:55.7827924 +0000 UTC m=+12.404324414" watchObservedRunningTime="2024-07-02 07:59:55.783403023 +0000 UTC m=+12.404935038" Jul 2 08:00:05.099222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017039761.mount: Deactivated successfully. Jul 2 08:00:10.654760 env[1191]: time="2024-07-02T08:00:10.654646550Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:10.661629 env[1191]: time="2024-07-02T08:00:10.661445205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:10.672556 env[1191]: time="2024-07-02T08:00:10.670756656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:10.672556 env[1191]: time="2024-07-02T08:00:10.671203999Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:00:10.676579 env[1191]: time="2024-07-02T08:00:10.676506897Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:00:10.684848 env[1191]: time="2024-07-02T08:00:10.683519775Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:00:10.727561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982061804.mount: Deactivated successfully. 
Jul 2 08:00:10.741899 env[1191]: time="2024-07-02T08:00:10.741788779Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\"" Jul 2 08:00:10.745680 env[1191]: time="2024-07-02T08:00:10.744942005Z" level=info msg="StartContainer for \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\"" Jul 2 08:00:10.821554 systemd[1]: Started cri-containerd-e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb.scope. Jul 2 08:00:10.981240 env[1191]: time="2024-07-02T08:00:10.981080042Z" level=info msg="StartContainer for \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\" returns successfully" Jul 2 08:00:10.994717 systemd[1]: cri-containerd-e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb.scope: Deactivated successfully. Jul 2 08:00:11.048760 env[1191]: time="2024-07-02T08:00:11.048675929Z" level=info msg="shim disconnected" id=e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb Jul 2 08:00:11.048760 env[1191]: time="2024-07-02T08:00:11.048759307Z" level=warning msg="cleaning up after shim disconnected" id=e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb namespace=k8s.io Jul 2 08:00:11.048760 env[1191]: time="2024-07-02T08:00:11.048775137Z" level=info msg="cleaning up dead shim" Jul 2 08:00:11.070882 env[1191]: time="2024-07-02T08:00:11.070818415Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:00:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2395 runtime=io.containerd.runc.v2\n" Jul 2 08:00:11.717539 systemd[1]: run-containerd-runc-k8s.io-e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb-runc.Gvpj7u.mount: Deactivated successfully. Jul 2 08:00:11.717705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb-rootfs.mount: Deactivated successfully. Jul 2 08:00:11.865494 kubelet[1974]: E0702 08:00:11.865448 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:11.873858 env[1191]: time="2024-07-02T08:00:11.873786580Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:00:11.910018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885112919.mount: Deactivated successfully. Jul 2 08:00:11.932992 env[1191]: time="2024-07-02T08:00:11.932912393Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\"" Jul 2 08:00:11.939384 env[1191]: time="2024-07-02T08:00:11.934948700Z" level=info msg="StartContainer for \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\"" Jul 2 08:00:12.008673 systemd[1]: Started cri-containerd-b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d.scope. 
Jul 2 08:00:12.082089 env[1191]: time="2024-07-02T08:00:12.082012758Z" level=info msg="StartContainer for \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\" returns successfully" Jul 2 08:00:12.102886 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:00:12.103274 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:00:12.105527 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:00:12.108821 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:00:12.119430 systemd[1]: cri-containerd-b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d.scope: Deactivated successfully. Jul 2 08:00:12.152626 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:00:12.193075 env[1191]: time="2024-07-02T08:00:12.193001157Z" level=info msg="shim disconnected" id=b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d Jul 2 08:00:12.193735 env[1191]: time="2024-07-02T08:00:12.193677653Z" level=warning msg="cleaning up after shim disconnected" id=b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d namespace=k8s.io Jul 2 08:00:12.193999 env[1191]: time="2024-07-02T08:00:12.193953356Z" level=info msg="cleaning up dead shim" Jul 2 08:00:12.211602 env[1191]: time="2024-07-02T08:00:12.211529607Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n" Jul 2 08:00:12.717047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d-rootfs.mount: Deactivated successfully. Jul 2 08:00:12.869996 kubelet[1974]: E0702 08:00:12.869947 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:12.877404 env[1191]: time="2024-07-02T08:00:12.877064132Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:00:12.934545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543092992.mount: Deactivated successfully. Jul 2 08:00:12.947075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19883192.mount: Deactivated successfully. Jul 2 08:00:12.964474 env[1191]: time="2024-07-02T08:00:12.964397059Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\"" Jul 2 08:00:12.974972 env[1191]: time="2024-07-02T08:00:12.973636610Z" level=info msg="StartContainer for \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\"" Jul 2 08:00:13.060847 systemd[1]: Started cri-containerd-94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693.scope. Jul 2 08:00:13.211583 systemd[1]: cri-containerd-94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693.scope: Deactivated successfully. 
Jul 2 08:00:13.216003 env[1191]: time="2024-07-02T08:00:13.215940971Z" level=info msg="StartContainer for \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\" returns successfully" Jul 2 08:00:13.304041 env[1191]: time="2024-07-02T08:00:13.303864771Z" level=info msg="shim disconnected" id=94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693 Jul 2 08:00:13.304752 env[1191]: time="2024-07-02T08:00:13.304706942Z" level=warning msg="cleaning up after shim disconnected" id=94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693 namespace=k8s.io Jul 2 08:00:13.305096 env[1191]: time="2024-07-02T08:00:13.305064493Z" level=info msg="cleaning up dead shim" Jul 2 08:00:13.340093 env[1191]: time="2024-07-02T08:00:13.340022191Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:00:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2521 runtime=io.containerd.runc.v2\n" Jul 2 08:00:13.883480 kubelet[1974]: E0702 08:00:13.882990 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:13.900139 env[1191]: time="2024-07-02T08:00:13.900020965Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:00:13.946734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895327257.mount: Deactivated successfully. Jul 2 08:00:13.970237 env[1191]: time="2024-07-02T08:00:13.970125902Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\"" Jul 2 08:00:13.973781 env[1191]: time="2024-07-02T08:00:13.973719263Z" level=info msg="StartContainer for \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\"" Jul 2 08:00:14.040743 systemd[1]: Started cri-containerd-e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b.scope. Jul 2 08:00:14.191110 systemd[1]: cri-containerd-e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b.scope: Deactivated successfully. 
Jul 2 08:00:14.195506 env[1191]: time="2024-07-02T08:00:14.195081864Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7b2ef7_74b0_4040_9baa_9d4faa04f29d.slice/cri-containerd-e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b.scope/memory.events\": no such file or directory" Jul 2 08:00:14.203520 env[1191]: time="2024-07-02T08:00:14.203455460Z" level=info msg="StartContainer for \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\" returns successfully" Jul 2 08:00:14.270029 env[1191]: time="2024-07-02T08:00:14.269935385Z" level=info msg="shim disconnected" id=e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b Jul 2 08:00:14.270833 env[1191]: time="2024-07-02T08:00:14.270780884Z" level=warning msg="cleaning up after shim disconnected" id=e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b namespace=k8s.io Jul 2 08:00:14.271043 env[1191]: time="2024-07-02T08:00:14.271011839Z" level=info msg="cleaning up dead shim" Jul 2 08:00:14.298752 env[1191]: time="2024-07-02T08:00:14.298681781Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:00:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2578 runtime=io.containerd.runc.v2\n" Jul 2 08:00:14.405700 env[1191]: time="2024-07-02T08:00:14.405608660Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:14.411419 env[1191]: time="2024-07-02T08:00:14.411146108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:14.414368 env[1191]: time="2024-07-02T08:00:14.414295568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:00:14.415648 env[1191]: time="2024-07-02T08:00:14.415585217Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:00:14.421855 env[1191]: time="2024-07-02T08:00:14.421786839Z" level=info msg="CreateContainer within sandbox \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:00:14.461871 env[1191]: time="2024-07-02T08:00:14.461018311Z" level=info msg="CreateContainer within sandbox \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\"" Jul 2 08:00:14.464877 env[1191]: time="2024-07-02T08:00:14.464819653Z" level=info msg="StartContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\"" Jul 2 08:00:14.517128 systemd[1]: Started cri-containerd-aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1.scope. 
Jul 2 08:00:14.581813 env[1191]: time="2024-07-02T08:00:14.581714558Z" level=info msg="StartContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" returns successfully" Jul 2 08:00:14.722139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b-rootfs.mount: Deactivated successfully. Jul 2 08:00:14.882006 kubelet[1974]: E0702 08:00:14.881971 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:14.886921 kubelet[1974]: E0702 08:00:14.886874 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:14.891154 env[1191]: time="2024-07-02T08:00:14.891072237Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:00:14.933786 env[1191]: time="2024-07-02T08:00:14.933701053Z" level=info msg="CreateContainer within sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\"" Jul 2 08:00:14.935128 env[1191]: time="2024-07-02T08:00:14.935052690Z" level=info msg="StartContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\"" Jul 2 08:00:14.992626 systemd[1]: Started cri-containerd-1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6.scope. Jul 2 08:00:15.141295 kubelet[1974]: I0702 08:00:15.141231 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-m28q6" podStartSLOduration=1.748633367 podStartE2EDuration="21.141163305s" podCreationTimestamp="2024-07-02 07:59:54 +0000 UTC" firstStartedPulling="2024-07-02 07:59:55.023562492 +0000 UTC m=+11.645094468" lastFinishedPulling="2024-07-02 08:00:14.416092421 +0000 UTC m=+31.037624406" observedRunningTime="2024-07-02 08:00:15.069470345 +0000 UTC m=+31.691002340" watchObservedRunningTime="2024-07-02 08:00:15.141163305 +0000 UTC m=+31.762695305" Jul 2 08:00:15.224083 env[1191]: time="2024-07-02T08:00:15.224014031Z" level=info msg="StartContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" returns successfully" Jul 2 08:00:15.668944 kubelet[1974]: I0702 08:00:15.667482 1974 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:00:15.719509 systemd[1]: run-containerd-runc-k8s.io-1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6-runc.Qw33zx.mount: Deactivated successfully. Jul 2 08:00:15.873714 kubelet[1974]: I0702 08:00:15.873658 1974 topology_manager.go:215] "Topology Admit Handler" podUID="12f222f3-abb8-4001-8550-4bbd9751b950" podNamespace="kube-system" podName="coredns-76f75df574-l8pdr" Jul 2 08:00:15.877264 kubelet[1974]: I0702 08:00:15.877227 1974 topology_manager.go:215] "Topology Admit Handler" podUID="065cc299-c007-4d16-b87e-6cc4eb3e17e7" podNamespace="kube-system" podName="coredns-76f75df574-79c4t" Jul 2 08:00:15.882839 systemd[1]: Created slice kubepods-burstable-pod12f222f3_abb8_4001_8550_4bbd9751b950.slice. 
Jul 2 08:00:15.896356 systemd[1]: Created slice kubepods-burstable-pod065cc299_c007_4d16_b87e_6cc4eb3e17e7.slice. Jul 2 08:00:15.903251 kubelet[1974]: E0702 08:00:15.903214 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:15.906472 kubelet[1974]: E0702 08:00:15.905297 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:15.915105 kubelet[1974]: I0702 08:00:15.915000 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwd6k\" (UniqueName: \"kubernetes.io/projected/12f222f3-abb8-4001-8550-4bbd9751b950-kube-api-access-hwd6k\") pod \"coredns-76f75df574-l8pdr\" (UID: \"12f222f3-abb8-4001-8550-4bbd9751b950\") " pod="kube-system/coredns-76f75df574-l8pdr" Jul 2 08:00:15.915695 kubelet[1974]: I0702 08:00:15.915617 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/065cc299-c007-4d16-b87e-6cc4eb3e17e7-config-volume\") pod \"coredns-76f75df574-79c4t\" (UID: \"065cc299-c007-4d16-b87e-6cc4eb3e17e7\") " pod="kube-system/coredns-76f75df574-79c4t" Jul 2 08:00:15.915879 kubelet[1974]: I0702 08:00:15.915863 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12f222f3-abb8-4001-8550-4bbd9751b950-config-volume\") pod \"coredns-76f75df574-l8pdr\" (UID: \"12f222f3-abb8-4001-8550-4bbd9751b950\") " pod="kube-system/coredns-76f75df574-l8pdr" Jul 2 08:00:15.916088 kubelet[1974]: I0702 08:00:15.916068 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h656h\" (UniqueName: \"kubernetes.io/projected/065cc299-c007-4d16-b87e-6cc4eb3e17e7-kube-api-access-h656h\") pod \"coredns-76f75df574-79c4t\" (UID: \"065cc299-c007-4d16-b87e-6cc4eb3e17e7\") " pod="kube-system/coredns-76f75df574-79c4t" Jul 2 08:00:16.208283 kubelet[1974]: E0702 08:00:16.208233 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:16.210575 env[1191]: time="2024-07-02T08:00:16.210493104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-79c4t,Uid:065cc299-c007-4d16-b87e-6cc4eb3e17e7,Namespace:kube-system,Attempt:0,}" Jul 2 08:00:16.490107 kubelet[1974]: E0702 08:00:16.489297 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:16.490739 env[1191]: time="2024-07-02T08:00:16.490670225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l8pdr,Uid:12f222f3-abb8-4001-8550-4bbd9751b950,Namespace:kube-system,Attempt:0,}" Jul 2 08:00:16.907169 kubelet[1974]: E0702 08:00:16.907110 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:17.909144 kubelet[1974]: E0702 08:00:17.909105 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:18.601472 systemd-networkd[998]: cilium_host: Link UP Jul 2 08:00:18.605483 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 08:00:18.605663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 08:00:18.605798 systemd-networkd[998]: cilium_net: Link UP Jul 2 08:00:18.606200 systemd-networkd[998]: cilium_net: Gained carrier Jul 2 08:00:18.606732 systemd-networkd[998]: cilium_host: Gained carrier Jul 2 08:00:18.726543 systemd-networkd[998]: cilium_net: Gained IPv6LL Jul 2 08:00:18.831057 systemd-networkd[998]: cilium_vxlan: Link UP Jul 2 08:00:18.831070 systemd-networkd[998]: cilium_vxlan: Gained carrier Jul 2 08:00:19.402419 kernel: NET: Registered PF_ALG protocol family Jul 2 08:00:19.549586 systemd-networkd[998]: cilium_host: Gained IPv6LL Jul 2 08:00:20.415056 systemd-networkd[998]: lxc_health: Link UP Jul 2 08:00:20.424945 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:00:20.424823 systemd-networkd[998]: lxc_health: Gained carrier Jul 2 08:00:20.639475 systemd-networkd[998]: cilium_vxlan: Gained IPv6LL Jul 2 08:00:20.653818 kubelet[1974]: E0702 08:00:20.653778 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:20.679212 kubelet[1974]: I0702 08:00:20.679040 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vw4w8" podStartSLOduration=10.78394115 podStartE2EDuration="26.678982245s" podCreationTimestamp="2024-07-02 07:59:54 +0000 UTC" firstStartedPulling="2024-07-02 07:59:54.778570873 +0000 UTC m=+11.400102849" lastFinishedPulling="2024-07-02 08:00:10.673611954 +0000 UTC m=+27.295143944" observedRunningTime="2024-07-02 08:00:16.231171251 +0000 UTC m=+32.852703250" watchObservedRunningTime="2024-07-02 08:00:20.678982245 +0000 UTC m=+37.300514244" Jul 2 08:00:20.816552 systemd-networkd[998]: lxcba4cb23a2ade: Link UP Jul 2 08:00:20.833360 kernel: eth0: renamed from tmpc4392 Jul 2 08:00:20.846452 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba4cb23a2ade: link becomes ready Jul 2 08:00:20.847506 systemd-networkd[998]: lxcba4cb23a2ade: Gained carrier Jul 2 08:00:20.917888 kubelet[1974]: E0702 08:00:20.917845 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:21.080010 systemd-networkd[998]: lxc69874bfa974b: Link UP Jul 2 08:00:21.086364 kernel: eth0: renamed from tmp36906 Jul 2 08:00:21.091421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc69874bfa974b: link becomes ready Jul 2 08:00:21.091486 systemd-networkd[998]: lxc69874bfa974b: Gained carrier Jul 2 08:00:22.109597 systemd-networkd[998]: lxc_health: Gained IPv6LL Jul 2 08:00:22.173629 systemd-networkd[998]: lxc69874bfa974b: Gained IPv6LL Jul 2 08:00:22.557653 systemd-networkd[998]: lxcba4cb23a2ade: Gained IPv6LL Jul 2 08:00:27.931241 env[1191]: time="2024-07-02T08:00:27.930995424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:00:27.931241 env[1191]: time="2024-07-02T08:00:27.931236541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:00:27.932035 env[1191]: time="2024-07-02T08:00:27.931275092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:00:27.932376 env[1191]: time="2024-07-02T08:00:27.932252737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0 pid=3170 runtime=io.containerd.runc.v2 Jul 2 08:00:28.000791 systemd[1]: run-containerd-runc-k8s.io-369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0-runc.KkkRBl.mount: Deactivated successfully. Jul 2 08:00:28.007993 systemd[1]: Started cri-containerd-369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0.scope. Jul 2 08:00:28.027424 env[1191]: time="2024-07-02T08:00:28.027236169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:00:28.027424 env[1191]: time="2024-07-02T08:00:28.027306025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:00:28.027826 env[1191]: time="2024-07-02T08:00:28.027748982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:00:28.029364 env[1191]: time="2024-07-02T08:00:28.028215892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad pid=3198 runtime=io.containerd.runc.v2 Jul 2 08:00:28.069836 systemd[1]: Started cri-containerd-c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad.scope. Jul 2 08:00:28.115282 env[1191]: time="2024-07-02T08:00:28.115210057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l8pdr,Uid:12f222f3-abb8-4001-8550-4bbd9751b950,Namespace:kube-system,Attempt:0,} returns sandbox id \"369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0\"" Jul 2 08:00:28.116876 kubelet[1974]: E0702 08:00:28.116841 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:28.120200 env[1191]: time="2024-07-02T08:00:28.120143933Z" level=info msg="CreateContainer within sandbox \"369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:00:28.157350 env[1191]: time="2024-07-02T08:00:28.157269623Z" level=info msg="CreateContainer within sandbox \"369069f5a0c3f1ceef400430d683f2a2d12b9fd22e5a8f0c8768836feec1e4c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b868db75437c0bf7d89fd7e53bc599ad2bf4ba350e73f50ee3bde07c4c1629d\"" Jul 2 08:00:28.164678 env[1191]: time="2024-07-02T08:00:28.164620260Z" level=info msg="StartContainer for \"3b868db75437c0bf7d89fd7e53bc599ad2bf4ba350e73f50ee3bde07c4c1629d\"" Jul 2 08:00:28.201208 systemd[1]: Started cri-containerd-3b868db75437c0bf7d89fd7e53bc599ad2bf4ba350e73f50ee3bde07c4c1629d.scope. 
Jul 2 08:00:28.242789 env[1191]: time="2024-07-02T08:00:28.242727904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-79c4t,Uid:065cc299-c007-4d16-b87e-6cc4eb3e17e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad\"" Jul 2 08:00:28.243830 kubelet[1974]: E0702 08:00:28.243776 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:28.250601 env[1191]: time="2024-07-02T08:00:28.250286367Z" level=info msg="CreateContainer within sandbox \"c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:00:28.286937 env[1191]: time="2024-07-02T08:00:28.286847717Z" level=info msg="CreateContainer within sandbox \"c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f78075c5199df00fa84940525a4a3654ff1b51f0246c1282f6eeee407c1afb1\"" Jul 2 08:00:28.288139 env[1191]: time="2024-07-02T08:00:28.288090123Z" level=info msg="StartContainer for \"6f78075c5199df00fa84940525a4a3654ff1b51f0246c1282f6eeee407c1afb1\"" Jul 2 08:00:28.289215 env[1191]: time="2024-07-02T08:00:28.289169126Z" level=info msg="StartContainer for \"3b868db75437c0bf7d89fd7e53bc599ad2bf4ba350e73f50ee3bde07c4c1629d\" returns successfully" Jul 2 08:00:28.316712 systemd[1]: Started cri-containerd-6f78075c5199df00fa84940525a4a3654ff1b51f0246c1282f6eeee407c1afb1.scope. Jul 2 08:00:28.386096 env[1191]: time="2024-07-02T08:00:28.386018833Z" level=info msg="StartContainer for \"6f78075c5199df00fa84940525a4a3654ff1b51f0246c1282f6eeee407c1afb1\" returns successfully" Jul 2 08:00:28.938719 systemd[1]: run-containerd-runc-k8s.io-c43921d131155a67c30d64505dc9924197afd39db9fae8af1aea043ac61941ad-runc.o63S0p.mount: Deactivated successfully. 
Jul 2 08:00:28.974493 kubelet[1974]: E0702 08:00:28.974455 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:28.986055 kubelet[1974]: E0702 08:00:28.986018 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:29.005317 kubelet[1974]: I0702 08:00:29.005254 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l8pdr" podStartSLOduration=35.005191385 podStartE2EDuration="35.005191385s" podCreationTimestamp="2024-07-02 07:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:00:29.003618685 +0000 UTC m=+45.625150687" watchObservedRunningTime="2024-07-02 08:00:29.005191385 +0000 UTC m=+45.626723389" Jul 2 08:00:29.988720 kubelet[1974]: E0702 08:00:29.988670 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:29.990212 kubelet[1974]: E0702 08:00:29.990180 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:30.992411 kubelet[1974]: E0702 08:00:30.991237 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:30.993021 kubelet[1974]: E0702 08:00:30.992560 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:00:46.714833 systemd[1]: Started sshd@6-146.190.152.6:22-147.75.109.163:52600.service. Jul 2 08:00:46.808422 sshd[3330]: Accepted publickey for core from 147.75.109.163 port 52600 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:00:46.813202 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:00:46.828044 systemd[1]: Started session-6.scope. Jul 2 08:00:46.832558 systemd-logind[1177]: New session 6 of user core. Jul 2 08:00:47.224469 sshd[3330]: pam_unix(sshd:session): session closed for user core Jul 2 08:00:47.229976 systemd-logind[1177]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:00:47.232474 systemd[1]: sshd@6-146.190.152.6:22-147.75.109.163:52600.service: Deactivated successfully. Jul 2 08:00:47.233729 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:00:47.236714 systemd-logind[1177]: Removed session 6. Jul 2 08:00:52.234441 systemd[1]: Started sshd@7-146.190.152.6:22-147.75.109.163:52606.service. Jul 2 08:00:52.288145 sshd[3344]: Accepted publickey for core from 147.75.109.163 port 52606 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:00:52.291210 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:00:52.300577 systemd[1]: Started session-7.scope. Jul 2 08:00:52.301521 systemd-logind[1177]: New session 7 of user core. 
Jul 2 08:00:52.508780 sshd[3344]: pam_unix(sshd:session): session closed for user core Jul 2 08:00:52.514543 systemd[1]: sshd@7-146.190.152.6:22-147.75.109.163:52606.service: Deactivated successfully. Jul 2 08:00:52.515733 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:00:52.518638 systemd-logind[1177]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:00:52.520265 systemd-logind[1177]: Removed session 7. Jul 2 08:00:57.048545 systemd[1]: Started sshd@8-146.190.152.6:22-119.96.158.87:54976.service. Jul 2 08:00:57.060974 sshd[3359]: Connection closed by 119.96.158.87 port 54976 [preauth] Jul 2 08:00:57.062410 systemd[1]: sshd@8-146.190.152.6:22-119.96.158.87:54976.service: Deactivated successfully. Jul 2 08:00:57.517366 systemd[1]: Started sshd@9-146.190.152.6:22-147.75.109.163:49582.service. Jul 2 08:00:57.562669 sshd[3363]: Accepted publickey for core from 147.75.109.163 port 49582 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:00:57.564811 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:00:57.571696 systemd[1]: Started session-8.scope. Jul 2 08:00:57.573443 systemd-logind[1177]: New session 8 of user core. Jul 2 08:00:57.726144 sshd[3363]: pam_unix(sshd:session): session closed for user core Jul 2 08:00:57.730815 systemd[1]: sshd@9-146.190.152.6:22-147.75.109.163:49582.service: Deactivated successfully. Jul 2 08:00:57.731755 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:00:57.732826 systemd-logind[1177]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:00:57.734806 systemd-logind[1177]: Removed session 8. Jul 2 08:01:00.703186 kubelet[1974]: E0702 08:01:00.703105 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:02.739639 systemd[1]: Started sshd@10-146.190.152.6:22-147.75.109.163:37686.service. Jul 2 08:01:02.808099 sshd[3380]: Accepted publickey for core from 147.75.109.163 port 37686 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:02.813562 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:02.834062 systemd[1]: Started session-9.scope. Jul 2 08:01:02.834718 systemd-logind[1177]: New session 9 of user core. Jul 2 08:01:03.042253 sshd[3380]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:03.047264 systemd[1]: sshd@10-146.190.152.6:22-147.75.109.163:37686.service: Deactivated successfully. Jul 2 08:01:03.048496 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:01:03.050127 systemd-logind[1177]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:01:03.051664 systemd-logind[1177]: Removed session 9. Jul 2 08:01:08.055092 systemd[1]: Started sshd@11-146.190.152.6:22-147.75.109.163:37690.service. Jul 2 08:01:08.113614 sshd[3393]: Accepted publickey for core from 147.75.109.163 port 37690 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:08.119736 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:08.132199 systemd[1]: Started session-10.scope. Jul 2 08:01:08.132949 systemd-logind[1177]: New session 10 of user core. Jul 2 08:01:08.327743 sshd[3393]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:08.339906 systemd[1]: Started sshd@12-146.190.152.6:22-147.75.109.163:37702.service. 
Jul 2 08:01:08.340877 systemd[1]: sshd@11-146.190.152.6:22-147.75.109.163:37690.service: Deactivated successfully. Jul 2 08:01:08.342644 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:01:08.347779 systemd-logind[1177]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:01:08.352673 systemd-logind[1177]: Removed session 10. Jul 2 08:01:08.400270 sshd[3405]: Accepted publickey for core from 147.75.109.163 port 37702 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:08.402557 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:08.413446 systemd[1]: Started session-11.scope. Jul 2 08:01:08.414998 systemd-logind[1177]: New session 11 of user core. Jul 2 08:01:08.778676 sshd[3405]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:08.791908 systemd[1]: Started sshd@13-146.190.152.6:22-147.75.109.163:37716.service. Jul 2 08:01:08.794281 systemd[1]: sshd@12-146.190.152.6:22-147.75.109.163:37702.service: Deactivated successfully. Jul 2 08:01:08.797377 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:01:08.806795 systemd-logind[1177]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:01:08.812858 systemd-logind[1177]: Removed session 11. Jul 2 08:01:08.889011 sshd[3415]: Accepted publickey for core from 147.75.109.163 port 37716 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:08.890744 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:08.904119 systemd-logind[1177]: New session 12 of user core. Jul 2 08:01:08.905411 systemd[1]: Started session-12.scope. Jul 2 08:01:09.140756 sshd[3415]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:09.146290 systemd-logind[1177]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:01:09.147277 systemd[1]: sshd@13-146.190.152.6:22-147.75.109.163:37716.service: Deactivated successfully. Jul 2 08:01:09.148851 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:01:09.151838 systemd-logind[1177]: Removed session 12. Jul 2 08:01:14.156830 systemd[1]: Started sshd@14-146.190.152.6:22-147.75.109.163:38344.service. Jul 2 08:01:14.212137 sshd[3428]: Accepted publickey for core from 147.75.109.163 port 38344 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:14.215554 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:14.224520 systemd-logind[1177]: New session 13 of user core. Jul 2 08:01:14.224660 systemd[1]: Started session-13.scope. Jul 2 08:01:14.421621 sshd[3428]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:14.425043 systemd-logind[1177]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:01:14.425458 systemd[1]: sshd@14-146.190.152.6:22-147.75.109.163:38344.service: Deactivated successfully. Jul 2 08:01:14.426307 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:01:14.427845 systemd-logind[1177]: Removed session 13. Jul 2 08:01:14.702147 kubelet[1974]: E0702 08:01:14.702009 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:19.428281 systemd[1]: Started sshd@15-146.190.152.6:22-147.75.109.163:38354.service. 
Jul 2 08:01:19.476267 sshd[3440]: Accepted publickey for core from 147.75.109.163 port 38354 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:19.478764 sshd[3440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:19.487484 systemd-logind[1177]: New session 14 of user core. Jul 2 08:01:19.489039 systemd[1]: Started session-14.scope. Jul 2 08:01:19.667298 sshd[3440]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:19.672379 systemd[1]: sshd@15-146.190.152.6:22-147.75.109.163:38354.service: Deactivated successfully. Jul 2 08:01:19.673374 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:01:19.675682 systemd-logind[1177]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:01:19.677567 systemd-logind[1177]: Removed session 14. Jul 2 08:01:21.703979 kubelet[1974]: E0702 08:01:21.703934 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:22.702769 kubelet[1974]: E0702 08:01:22.702730 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:22.705224 kubelet[1974]: E0702 08:01:22.704128 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:24.677053 systemd[1]: Started sshd@16-146.190.152.6:22-147.75.109.163:57770.service. Jul 2 08:01:24.733092 sshd[3451]: Accepted publickey for core from 147.75.109.163 port 57770 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:24.735021 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:24.744411 systemd[1]: Started session-15.scope. Jul 2 08:01:24.745145 systemd-logind[1177]: New session 15 of user core. Jul 2 08:01:24.897525 sshd[3451]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:24.905546 systemd[1]: Started sshd@17-146.190.152.6:22-147.75.109.163:57772.service. Jul 2 08:01:24.908197 systemd[1]: sshd@16-146.190.152.6:22-147.75.109.163:57770.service: Deactivated successfully. Jul 2 08:01:24.909220 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:01:24.911027 systemd-logind[1177]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:01:24.913017 systemd-logind[1177]: Removed session 15. Jul 2 08:01:24.952927 sshd[3462]: Accepted publickey for core from 147.75.109.163 port 57772 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:24.957945 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:24.966941 systemd[1]: Started session-16.scope. Jul 2 08:01:24.968043 systemd-logind[1177]: New session 16 of user core. Jul 2 08:01:25.432247 sshd[3462]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:25.439182 systemd[1]: sshd@17-146.190.152.6:22-147.75.109.163:57772.service: Deactivated successfully. Jul 2 08:01:25.440275 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:01:25.442274 systemd-logind[1177]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:01:25.445123 systemd[1]: Started sshd@18-146.190.152.6:22-147.75.109.163:57784.service. Jul 2 08:01:25.448155 systemd-logind[1177]: Removed session 16. 
Jul 2 08:01:25.526481 sshd[3475]: Accepted publickey for core from 147.75.109.163 port 57784 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:25.528754 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:25.537670 systemd[1]: Started session-17.scope. Jul 2 08:01:25.539487 systemd-logind[1177]: New session 17 of user core. Jul 2 08:01:27.505986 sshd[3475]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:27.515075 systemd[1]: Started sshd@19-146.190.152.6:22-147.75.109.163:57794.service. Jul 2 08:01:27.519556 systemd[1]: sshd@18-146.190.152.6:22-147.75.109.163:57784.service: Deactivated successfully. Jul 2 08:01:27.523124 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:01:27.525438 systemd-logind[1177]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:01:27.527415 systemd-logind[1177]: Removed session 17. Jul 2 08:01:27.585940 sshd[3492]: Accepted publickey for core from 147.75.109.163 port 57794 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:27.588365 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:27.595195 systemd-logind[1177]: New session 18 of user core. Jul 2 08:01:27.595923 systemd[1]: Started session-18.scope. Jul 2 08:01:28.028569 sshd[3492]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:28.035887 systemd[1]: sshd@19-146.190.152.6:22-147.75.109.163:57794.service: Deactivated successfully. Jul 2 08:01:28.036814 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:01:28.042451 systemd-logind[1177]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:01:28.046595 systemd[1]: Started sshd@20-146.190.152.6:22-147.75.109.163:57810.service. Jul 2 08:01:28.051821 systemd-logind[1177]: Removed session 18. Jul 2 08:01:28.091010 sshd[3503]: Accepted publickey for core from 147.75.109.163 port 57810 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:28.093806 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:28.101546 systemd-logind[1177]: New session 19 of user core. Jul 2 08:01:28.102514 systemd[1]: Started session-19.scope. Jul 2 08:01:28.265995 sshd[3503]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:28.269804 systemd[1]: sshd@20-146.190.152.6:22-147.75.109.163:57810.service: Deactivated successfully. Jul 2 08:01:28.271035 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:01:28.272493 systemd-logind[1177]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:01:28.273479 systemd-logind[1177]: Removed session 19. Jul 2 08:01:31.703345 kubelet[1974]: E0702 08:01:31.703253 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:33.278905 systemd[1]: Started sshd@21-146.190.152.6:22-147.75.109.163:49824.service. Jul 2 08:01:33.334819 sshd[3515]: Accepted publickey for core from 147.75.109.163 port 49824 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:33.338780 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:33.349455 systemd-logind[1177]: New session 20 of user core. Jul 2 08:01:33.350044 systemd[1]: Started session-20.scope. 
Jul 2 08:01:33.562891 sshd[3515]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:33.568641 systemd-logind[1177]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:01:33.571437 systemd[1]: sshd@21-146.190.152.6:22-147.75.109.163:49824.service: Deactivated successfully. Jul 2 08:01:33.572906 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:01:33.576686 systemd-logind[1177]: Removed session 20. Jul 2 08:01:38.572568 systemd[1]: Started sshd@22-146.190.152.6:22-147.75.109.163:49832.service. Jul 2 08:01:38.618676 sshd[3530]: Accepted publickey for core from 147.75.109.163 port 49832 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:38.622180 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:38.630929 systemd[1]: Started session-21.scope. Jul 2 08:01:38.632114 systemd-logind[1177]: New session 21 of user core. Jul 2 08:01:38.793050 sshd[3530]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:38.803352 systemd[1]: sshd@22-146.190.152.6:22-147.75.109.163:49832.service: Deactivated successfully. Jul 2 08:01:38.804519 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:01:38.805753 systemd-logind[1177]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:01:38.807159 systemd-logind[1177]: Removed session 21. Jul 2 08:01:43.803036 systemd[1]: Started sshd@23-146.190.152.6:22-147.75.109.163:42308.service. Jul 2 08:01:43.852885 sshd[3544]: Accepted publickey for core from 147.75.109.163 port 42308 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:43.855607 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:43.863182 systemd[1]: Started session-22.scope. Jul 2 08:01:43.863428 systemd-logind[1177]: New session 22 of user core. Jul 2 08:01:44.043485 sshd[3544]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:44.048239 systemd[1]: sshd@23-146.190.152.6:22-147.75.109.163:42308.service: Deactivated successfully. Jul 2 08:01:44.049119 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:01:44.050834 systemd-logind[1177]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:01:44.051961 systemd-logind[1177]: Removed session 22. Jul 2 08:01:44.702388 kubelet[1974]: E0702 08:01:44.702306 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:49.054536 systemd[1]: Started sshd@24-146.190.152.6:22-147.75.109.163:42320.service. Jul 2 08:01:49.100543 sshd[3555]: Accepted publickey for core from 147.75.109.163 port 42320 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:49.102921 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:49.110041 systemd-logind[1177]: New session 23 of user core. Jul 2 08:01:49.110513 systemd[1]: Started session-23.scope. Jul 2 08:01:49.257419 sshd[3555]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:49.262760 systemd[1]: sshd@24-146.190.152.6:22-147.75.109.163:42320.service: Deactivated successfully. Jul 2 08:01:49.263698 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:01:49.266630 systemd-logind[1177]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:01:49.268089 systemd-logind[1177]: Removed session 23. 
Jul 2 08:01:54.264999 systemd[1]: Started sshd@25-146.190.152.6:22-147.75.109.163:57632.service. Jul 2 08:01:54.309566 sshd[3567]: Accepted publickey for core from 147.75.109.163 port 57632 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:54.311440 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:54.319173 systemd-logind[1177]: New session 24 of user core. Jul 2 08:01:54.319726 systemd[1]: Started session-24.scope. Jul 2 08:01:54.477396 sshd[3567]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:54.480959 systemd[1]: sshd@25-146.190.152.6:22-147.75.109.163:57632.service: Deactivated successfully. Jul 2 08:01:54.482229 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:01:54.483107 systemd-logind[1177]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:01:54.484614 systemd-logind[1177]: Removed session 24. Jul 2 08:01:59.487798 systemd[1]: Started sshd@26-146.190.152.6:22-147.75.109.163:57644.service. Jul 2 08:01:59.545705 sshd[3581]: Accepted publickey for core from 147.75.109.163 port 57644 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:01:59.549504 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:01:59.559162 systemd-logind[1177]: New session 25 of user core. Jul 2 08:01:59.560861 systemd[1]: Started session-25.scope. Jul 2 08:01:59.703880 kubelet[1974]: E0702 08:01:59.703823 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:01:59.744407 sshd[3581]: pam_unix(sshd:session): session closed for user core Jul 2 08:01:59.752897 systemd[1]: sshd@26-146.190.152.6:22-147.75.109.163:57644.service: Deactivated successfully. Jul 2 08:01:59.754147 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:01:59.756915 systemd-logind[1177]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:01:59.761096 systemd-logind[1177]: Removed session 25. Jul 2 08:02:04.754926 systemd[1]: Started sshd@27-146.190.152.6:22-147.75.109.163:48654.service. Jul 2 08:02:04.808666 sshd[3594]: Accepted publickey for core from 147.75.109.163 port 48654 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:02:04.810967 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:02:04.818385 systemd[1]: Started session-26.scope. Jul 2 08:02:04.819172 systemd-logind[1177]: New session 26 of user core. Jul 2 08:02:04.981443 sshd[3594]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:04.988828 systemd[1]: sshd@27-146.190.152.6:22-147.75.109.163:48654.service: Deactivated successfully. Jul 2 08:02:04.990183 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:02:04.993155 systemd-logind[1177]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:02:04.996641 systemd[1]: Started sshd@28-146.190.152.6:22-147.75.109.163:48658.service. Jul 2 08:02:05.002380 systemd-logind[1177]: Removed session 26. Jul 2 08:02:05.048818 sshd[3605]: Accepted publickey for core from 147.75.109.163 port 48658 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:02:05.051676 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:02:05.056996 systemd-logind[1177]: New session 27 of user core. Jul 2 08:02:05.058469 systemd[1]: Started session-27.scope. 
Jul 2 08:02:05.176038 systemd[1]: Started sshd@29-146.190.152.6:22-87.251.88.6:48742.service. Jul 2 08:02:06.162012 sshd[3613]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=87.251.88.6 user=root Jul 2 08:02:06.983682 kubelet[1974]: I0702 08:02:06.983614 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-79c4t" podStartSLOduration=132.981239707 podStartE2EDuration="2m12.981239707s" podCreationTimestamp="2024-07-02 07:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:00:29.04539847 +0000 UTC m=+45.666930528" watchObservedRunningTime="2024-07-02 08:02:06.981239707 +0000 UTC m=+143.602771706" Jul 2 08:02:07.067025 env[1191]: time="2024-07-02T08:02:07.066924916Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:02:07.083195 env[1191]: time="2024-07-02T08:02:07.082828679Z" level=info msg="StopContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" with timeout 2 (s)" Jul 2 08:02:07.083195 env[1191]: time="2024-07-02T08:02:07.082981261Z" level=info msg="StopContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" with timeout 30 (s)" Jul 2 08:02:07.084061 env[1191]: time="2024-07-02T08:02:07.083512006Z" level=info msg="Stop container \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" with signal terminated" Jul 2 08:02:07.084061 env[1191]: time="2024-07-02T08:02:07.083564234Z" level=info msg="Stop container \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" with signal terminated" Jul 2 08:02:07.092843 systemd-networkd[998]: lxc_health: Link DOWN Jul 2 08:02:07.092851 systemd-networkd[998]: lxc_health: Lost carrier Jul 2 08:02:07.107236 systemd[1]: cri-containerd-aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1.scope: Deactivated successfully. Jul 2 08:02:07.143157 systemd[1]: cri-containerd-1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6.scope: Deactivated successfully. Jul 2 08:02:07.143530 systemd[1]: cri-containerd-1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6.scope: Consumed 11.490s CPU time. Jul 2 08:02:07.174836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1-rootfs.mount: Deactivated successfully. Jul 2 08:02:07.198264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6-rootfs.mount: Deactivated successfully. 
Jul 2 08:02:07.202612 env[1191]: time="2024-07-02T08:02:07.202553259Z" level=info msg="shim disconnected" id=1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6 Jul 2 08:02:07.204100 env[1191]: time="2024-07-02T08:02:07.204048398Z" level=warning msg="cleaning up after shim disconnected" id=1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6 namespace=k8s.io Jul 2 08:02:07.204368 env[1191]: time="2024-07-02T08:02:07.204313670Z" level=info msg="cleaning up dead shim" Jul 2 08:02:07.204831 env[1191]: time="2024-07-02T08:02:07.203938298Z" level=info msg="shim disconnected" id=aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1 Jul 2 08:02:07.204989 env[1191]: time="2024-07-02T08:02:07.204965733Z" level=warning msg="cleaning up after shim disconnected" id=aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1 namespace=k8s.io Jul 2 08:02:07.205097 env[1191]: time="2024-07-02T08:02:07.205080218Z" level=info msg="cleaning up dead shim" Jul 2 08:02:07.220175 env[1191]: time="2024-07-02T08:02:07.220083808Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\n" Jul 2 08:02:07.224832 env[1191]: time="2024-07-02T08:02:07.224775235Z" level=info msg="StopContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" returns successfully" Jul 2 08:02:07.225876 env[1191]: time="2024-07-02T08:02:07.225835364Z" level=info msg="StopPodSandbox for \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\"" Jul 2 08:02:07.226218 env[1191]: time="2024-07-02T08:02:07.226188020Z" level=info msg="Container to stop \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.226357 env[1191]: time="2024-07-02T08:02:07.226308783Z" level=info msg="Container to stop \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.226459 env[1191]: time="2024-07-02T08:02:07.226439073Z" level=info msg="Container to stop \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.226549 env[1191]: time="2024-07-02T08:02:07.226531343Z" level=info msg="Container to stop \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.226638 env[1191]: time="2024-07-02T08:02:07.226620734Z" level=info msg="Container to stop \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.229215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623-shm.mount: Deactivated successfully. Jul 2 08:02:07.236084 env[1191]: time="2024-07-02T08:02:07.234835288Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3673 runtime=io.containerd.runc.v2\n" Jul 2 08:02:07.241118 systemd[1]: cri-containerd-c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623.scope: Deactivated successfully. 
Jul 2 08:02:07.243423 env[1191]: time="2024-07-02T08:02:07.243313348Z" level=info msg="StopContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" returns successfully" Jul 2 08:02:07.244307 env[1191]: time="2024-07-02T08:02:07.244259501Z" level=info msg="StopPodSandbox for \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\"" Jul 2 08:02:07.245014 env[1191]: time="2024-07-02T08:02:07.244929952Z" level=info msg="Container to stop \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:02:07.248494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b-shm.mount: Deactivated successfully. Jul 2 08:02:07.263921 systemd[1]: cri-containerd-11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b.scope: Deactivated successfully. Jul 2 08:02:07.306659 env[1191]: time="2024-07-02T08:02:07.306593737Z" level=info msg="shim disconnected" id=c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623 Jul 2 08:02:07.308202 env[1191]: time="2024-07-02T08:02:07.308149418Z" level=warning msg="cleaning up after shim disconnected" id=c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623 namespace=k8s.io Jul 2 08:02:07.308545 env[1191]: time="2024-07-02T08:02:07.308502527Z" level=info msg="cleaning up dead shim" Jul 2 08:02:07.324860 env[1191]: time="2024-07-02T08:02:07.324796804Z" level=info msg="shim disconnected" id=11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b Jul 2 08:02:07.326467 env[1191]: time="2024-07-02T08:02:07.326412742Z" level=warning msg="cleaning up after shim disconnected" id=11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b namespace=k8s.io Jul 2 08:02:07.326613 env[1191]: time="2024-07-02T08:02:07.326459930Z" level=info msg="cleaning up dead shim" Jul 2 08:02:07.326699 env[1191]: time="2024-07-02T08:02:07.326668945Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3735 runtime=io.containerd.runc.v2\n" Jul 2 08:02:07.327372 env[1191]: time="2024-07-02T08:02:07.327301166Z" level=info msg="TearDown network for sandbox \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" successfully" Jul 2 08:02:07.327542 env[1191]: time="2024-07-02T08:02:07.327516864Z" level=info msg="StopPodSandbox for \"c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623\" returns successfully" Jul 2 08:02:07.355089 env[1191]: time="2024-07-02T08:02:07.355029695Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3749 runtime=io.containerd.runc.v2\n" Jul 2 08:02:07.359481 env[1191]: time="2024-07-02T08:02:07.358413507Z" level=info msg="TearDown network for sandbox \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\" successfully" Jul 2 08:02:07.359481 env[1191]: time="2024-07-02T08:02:07.358478703Z" level=info msg="StopPodSandbox for \"11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b\" returns successfully" Jul 2 08:02:07.434408 kubelet[1974]: I0702 08:02:07.434267 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cni-path\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 
08:02:07.434765 kubelet[1974]: I0702 08:02:07.434742 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-etc-cni-netd\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.434899 kubelet[1974]: I0702 08:02:07.434885 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-net\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435007 kubelet[1974]: I0702 08:02:07.434996 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-kernel\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435120 kubelet[1974]: I0702 08:02:07.435108 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-clustermesh-secrets\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435235 kubelet[1974]: I0702 08:02:07.435224 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljqsb\" (UniqueName: \"kubernetes.io/projected/75716295-d2d4-4548-b10e-76833afcf6c9-kube-api-access-ljqsb\") pod \"75716295-d2d4-4548-b10e-76833afcf6c9\" (UID: \"75716295-d2d4-4548-b10e-76833afcf6c9\") " Jul 2 08:02:07.435314 kubelet[1974]: I0702 08:02:07.435304 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hostproc\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435423 kubelet[1974]: I0702 08:02:07.435411 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hubble-tls\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435519 kubelet[1974]: I0702 08:02:07.435510 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-xtables-lock\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435599 kubelet[1974]: I0702 08:02:07.435587 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-cgroup\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435701 kubelet[1974]: I0702 08:02:07.435686 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-lib-modules\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.435881 kubelet[1974]: I0702 
08:02:07.435864 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-config-path\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.436009 kubelet[1974]: I0702 08:02:07.435997 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-run\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.436100 kubelet[1974]: I0702 08:02:07.436087 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwhs\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-kube-api-access-qgwhs\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.436184 kubelet[1974]: I0702 08:02:07.436165 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75716295-d2d4-4548-b10e-76833afcf6c9-cilium-config-path\") pod \"75716295-d2d4-4548-b10e-76833afcf6c9\" (UID: \"75716295-d2d4-4548-b10e-76833afcf6c9\") " Jul 2 08:02:07.436288 kubelet[1974]: I0702 08:02:07.436277 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-bpf-maps\") pod \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\" (UID: \"1e7b2ef7-74b0-4040-9baa-9d4faa04f29d\") " Jul 2 08:02:07.436450 kubelet[1974]: I0702 08:02:07.436422 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.436588 kubelet[1974]: I0702 08:02:07.436569 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.436725 kubelet[1974]: I0702 08:02:07.436708 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.436817 kubelet[1974]: I0702 08:02:07.436802 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.437311 kubelet[1974]: I0702 08:02:07.434395 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.437435 kubelet[1974]: I0702 08:02:07.437380 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.443199 kubelet[1974]: I0702 08:02:07.443149 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:02:07.443515 kubelet[1974]: I0702 08:02:07.443492 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.445577 kubelet[1974]: I0702 08:02:07.445532 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.446909 kubelet[1974]: I0702 08:02:07.445762 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.446909 kubelet[1974]: I0702 08:02:07.446843 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:07.447785 kubelet[1974]: I0702 08:02:07.447705 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:02:07.448140 kubelet[1974]: I0702 08:02:07.448105 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75716295-d2d4-4548-b10e-76833afcf6c9-kube-api-access-ljqsb" (OuterVolumeSpecName: "kube-api-access-ljqsb") pod "75716295-d2d4-4548-b10e-76833afcf6c9" (UID: "75716295-d2d4-4548-b10e-76833afcf6c9"). InnerVolumeSpecName "kube-api-access-ljqsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:02:07.451443 kubelet[1974]: I0702 08:02:07.451398 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75716295-d2d4-4548-b10e-76833afcf6c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75716295-d2d4-4548-b10e-76833afcf6c9" (UID: "75716295-d2d4-4548-b10e-76833afcf6c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:02:07.452124 kubelet[1974]: I0702 08:02:07.452090 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-kube-api-access-qgwhs" (OuterVolumeSpecName: "kube-api-access-qgwhs") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "kube-api-access-qgwhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:02:07.455186 kubelet[1974]: I0702 08:02:07.455131 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" (UID: "1e7b2ef7-74b0-4040-9baa-9d4faa04f29d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:02:07.537030 kubelet[1974]: I0702 08:02:07.536835 1974 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hostproc\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.537461 kubelet[1974]: I0702 08:02:07.537433 1974 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-hubble-tls\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.537636 kubelet[1974]: I0702 08:02:07.537622 1974 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-xtables-lock\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.537760 kubelet[1974]: I0702 08:02:07.537749 1974 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ljqsb\" (UniqueName: \"kubernetes.io/projected/75716295-d2d4-4548-b10e-76833afcf6c9-kube-api-access-ljqsb\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.537866 kubelet[1974]: I0702 08:02:07.537843 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-cgroup\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538019 kubelet[1974]: I0702 08:02:07.538000 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-config-path\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538158 kubelet[1974]: I0702 08:02:07.538142 1974 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-lib-modules\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538259 kubelet[1974]: I0702 08:02:07.538249 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cilium-run\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538359 kubelet[1974]: I0702 08:02:07.538349 1974 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qgwhs\" (UniqueName: \"kubernetes.io/projected/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-kube-api-access-qgwhs\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538451 kubelet[1974]: I0702 08:02:07.538440 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75716295-d2d4-4548-b10e-76833afcf6c9-cilium-config-path\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538544 kubelet[1974]: I0702 08:02:07.538534 1974 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-bpf-maps\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538637 kubelet[1974]: I0702 08:02:07.538625 1974 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-clustermesh-secrets\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538750 kubelet[1974]: I0702 08:02:07.538737 1974 
reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-cni-path\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538851 kubelet[1974]: I0702 08:02:07.538840 1974 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-etc-cni-netd\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.538936 kubelet[1974]: I0702 08:02:07.538926 1974 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-net\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.539018 kubelet[1974]: I0702 08:02:07.539009 1974 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d-host-proc-sys-kernel\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:07.717209 systemd[1]: Removed slice kubepods-burstable-pod1e7b2ef7_74b0_4040_9baa_9d4faa04f29d.slice. Jul 2 08:02:07.717402 systemd[1]: kubepods-burstable-pod1e7b2ef7_74b0_4040_9baa_9d4faa04f29d.slice: Consumed 11.666s CPU time. Jul 2 08:02:07.721754 systemd[1]: Removed slice kubepods-besteffort-pod75716295_d2d4_4548_b10e_76833afcf6c9.slice. Jul 2 08:02:07.961518 sshd[3613]: Failed password for root from 87.251.88.6 port 48742 ssh2 Jul 2 08:02:08.026901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11b4188015f9f073345d4c33e1d873791dc22ae49976fc3aace133d677b3354b-rootfs.mount: Deactivated successfully. Jul 2 08:02:08.027030 systemd[1]: var-lib-kubelet-pods-75716295\x2dd2d4\x2d4548\x2db10e\x2d76833afcf6c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljqsb.mount: Deactivated successfully. Jul 2 08:02:08.027099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59ce3dd524d7668987706f65da745d975a8df1f88113519685ab8b4e66c4623-rootfs.mount: Deactivated successfully. Jul 2 08:02:08.027203 systemd[1]: var-lib-kubelet-pods-1e7b2ef7\x2d74b0\x2d4040\x2d9baa\x2d9d4faa04f29d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqgwhs.mount: Deactivated successfully. Jul 2 08:02:08.027287 systemd[1]: var-lib-kubelet-pods-1e7b2ef7\x2d74b0\x2d4040\x2d9baa\x2d9d4faa04f29d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:02:08.027367 systemd[1]: var-lib-kubelet-pods-1e7b2ef7\x2d74b0\x2d4040\x2d9baa\x2d9d4faa04f29d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 08:02:08.297136 kubelet[1974]: I0702 08:02:08.296983 1974 scope.go:117] "RemoveContainer" containerID="aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1" Jul 2 08:02:08.299748 env[1191]: time="2024-07-02T08:02:08.299636192Z" level=info msg="RemoveContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\"" Jul 2 08:02:08.315795 env[1191]: time="2024-07-02T08:02:08.315591790Z" level=info msg="RemoveContainer for \"aaa9b09633b5fd34ee61d5a6134171d084e5cf12a75360097514334bb5e901b1\" returns successfully" Jul 2 08:02:08.316977 kubelet[1974]: I0702 08:02:08.316940 1974 scope.go:117] "RemoveContainer" containerID="1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6" Jul 2 08:02:08.318691 env[1191]: time="2024-07-02T08:02:08.318630128Z" level=info msg="RemoveContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\"" Jul 2 08:02:08.325050 env[1191]: time="2024-07-02T08:02:08.324065040Z" level=info msg="RemoveContainer for \"1c606c5213c35b05d9e6bf6793ac673c9caae3e9e74cca5c647b0307022ec5a6\" returns successfully" Jul 2 08:02:08.325571 kubelet[1974]: I0702 08:02:08.325529 1974 scope.go:117] "RemoveContainer" containerID="e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b" Jul 2 08:02:08.328997 env[1191]: time="2024-07-02T08:02:08.328902705Z" level=info msg="RemoveContainer for \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\"" Jul 2 08:02:08.338968 env[1191]: time="2024-07-02T08:02:08.338903689Z" level=info msg="RemoveContainer for \"e9d1fdb1cda1868ee12b3c2e3f5b9cb0bae7275aa8d21daa8b7173355ea4980b\" returns successfully" Jul 2 08:02:08.339652 kubelet[1974]: I0702 08:02:08.339549 1974 scope.go:117] "RemoveContainer" containerID="94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693" Jul 2 08:02:08.343229 env[1191]: time="2024-07-02T08:02:08.343151717Z" level=info msg="RemoveContainer for \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\"" Jul 2 08:02:08.348974 env[1191]: time="2024-07-02T08:02:08.348912171Z" level=info msg="RemoveContainer for \"94c0cbd5df97b259d0ca12e1e0bcf0d43ce78a15ef51d1c06eeb4a087f306693\" returns successfully" Jul 2 08:02:08.352619 kubelet[1974]: I0702 08:02:08.349410 1974 scope.go:117] "RemoveContainer" containerID="b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d" Jul 2 08:02:08.354244 env[1191]: time="2024-07-02T08:02:08.353647056Z" level=info msg="RemoveContainer for \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\"" Jul 2 08:02:08.359293 env[1191]: time="2024-07-02T08:02:08.359216876Z" level=info msg="RemoveContainer for \"b6134149d2f515e303ac03a910fee2dab990fddb6d07b2af9cf2015d7159011d\" returns successfully" Jul 2 08:02:08.359762 kubelet[1974]: I0702 08:02:08.359692 1974 scope.go:117] "RemoveContainer" containerID="e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb" Jul 2 08:02:08.362096 env[1191]: time="2024-07-02T08:02:08.362055030Z" level=info msg="RemoveContainer for \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\"" Jul 2 08:02:08.366914 env[1191]: time="2024-07-02T08:02:08.366861177Z" level=info msg="RemoveContainer for \"e4649c404fff417ac4b23e74d4eeb8998ab0c7df4b4e18f1b437a80c28bff9fb\" returns successfully" Jul 2 08:02:08.425349 sshd[3613]: Received disconnect from 87.251.88.6 port 48742:11: Bye Bye [preauth] Jul 2 08:02:08.425349 sshd[3613]: Disconnected from authenticating user root 87.251.88.6 port 48742 [preauth] Jul 2 08:02:08.427301 systemd[1]: 
sshd@29-146.190.152.6:22-87.251.88.6:48742.service: Deactivated successfully. Jul 2 08:02:08.837061 kubelet[1974]: E0702 08:02:08.836992 1974 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:02:08.921087 sshd[3605]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:08.929059 systemd[1]: Started sshd@30-146.190.152.6:22-147.75.109.163:48672.service. Jul 2 08:02:08.931496 systemd[1]: sshd@28-146.190.152.6:22-147.75.109.163:48658.service: Deactivated successfully. Jul 2 08:02:08.933031 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 08:02:08.933502 systemd[1]: session-27.scope: Consumed 1.129s CPU time. Jul 2 08:02:08.934566 systemd-logind[1177]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:02:08.935838 systemd-logind[1177]: Removed session 27. Jul 2 08:02:08.987689 sshd[3768]: Accepted publickey for core from 147.75.109.163 port 48672 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:02:08.990603 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:02:08.998880 systemd-logind[1177]: New session 28 of user core. Jul 2 08:02:09.000071 systemd[1]: Started session-28.scope. Jul 2 08:02:09.705582 kubelet[1974]: I0702 08:02:09.705543 1974 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" path="/var/lib/kubelet/pods/1e7b2ef7-74b0-4040-9baa-9d4faa04f29d/volumes" Jul 2 08:02:09.706360 kubelet[1974]: I0702 08:02:09.706305 1974 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="75716295-d2d4-4548-b10e-76833afcf6c9" path="/var/lib/kubelet/pods/75716295-d2d4-4548-b10e-76833afcf6c9/volumes" Jul 2 08:02:09.816864 sshd[3768]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:09.824577 systemd[1]: Started sshd@31-146.190.152.6:22-147.75.109.163:48682.service. Jul 2 08:02:09.836180 systemd[1]: sshd@30-146.190.152.6:22-147.75.109.163:48672.service: Deactivated successfully. Jul 2 08:02:09.838271 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 08:02:09.839553 systemd-logind[1177]: Session 28 logged out. Waiting for processes to exit. Jul 2 08:02:09.843713 systemd-logind[1177]: Removed session 28. Jul 2 08:02:09.874830 sshd[3782]: Accepted publickey for core from 147.75.109.163 port 48682 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:02:09.878059 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:02:09.886804 systemd-logind[1177]: New session 29 of user core. Jul 2 08:02:09.888486 systemd[1]: Started session-29.scope. 
Jul 2 08:02:09.906965 kubelet[1974]: I0702 08:02:09.906897 1974 topology_manager.go:215] "Topology Admit Handler" podUID="4b432cab-2f3c-4a0f-8ace-bb2119e4b390" podNamespace="kube-system" podName="cilium-rwkkw" Jul 2 08:02:09.907351 kubelet[1974]: E0702 08:02:09.907301 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="mount-bpf-fs" Jul 2 08:02:09.907512 kubelet[1974]: E0702 08:02:09.907495 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="clean-cilium-state" Jul 2 08:02:09.907623 kubelet[1974]: E0702 08:02:09.907608 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="cilium-agent" Jul 2 08:02:09.907713 kubelet[1974]: E0702 08:02:09.907700 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="mount-cgroup" Jul 2 08:02:09.907801 kubelet[1974]: E0702 08:02:09.907788 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="apply-sysctl-overwrites" Jul 2 08:02:09.907885 kubelet[1974]: E0702 08:02:09.907873 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75716295-d2d4-4548-b10e-76833afcf6c9" containerName="cilium-operator" Jul 2 08:02:09.910998 kubelet[1974]: I0702 08:02:09.910926 1974 memory_manager.go:354] "RemoveStaleState removing state" podUID="75716295-d2d4-4548-b10e-76833afcf6c9" containerName="cilium-operator" Jul 2 08:02:09.911414 kubelet[1974]: I0702 08:02:09.911389 1974 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7b2ef7-74b0-4040-9baa-9d4faa04f29d" containerName="cilium-agent" Jul 2 08:02:09.937874 systemd[1]: Created slice kubepods-burstable-pod4b432cab_2f3c_4a0f_8ace_bb2119e4b390.slice. 
Jul 2 08:02:10.060679 kubelet[1974]: I0702 08:02:10.060397 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-net\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061010 kubelet[1974]: I0702 08:02:10.060981 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-cgroup\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061143 kubelet[1974]: I0702 08:02:10.061130 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hubble-tls\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061322 kubelet[1974]: I0702 08:02:10.061295 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzj9\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-kube-api-access-sbzj9\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061469 kubelet[1974]: I0702 08:02:10.061456 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cni-path\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061630 kubelet[1974]: I0702 08:02:10.061601 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-run\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061816 kubelet[1974]: I0702 08:02:10.061799 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-clustermesh-secrets\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.061962 kubelet[1974]: I0702 08:02:10.061936 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-kernel\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062033 kubelet[1974]: I0702 08:02:10.062009 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hostproc\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062042 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-etc-cni-netd\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062073 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-xtables-lock\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062109 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-bpf-maps\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062156 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-ipsec-secrets\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062189 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-lib-modules\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.062455 kubelet[1974]: I0702 08:02:10.062243 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-config-path\") pod \"cilium-rwkkw\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " pod="kube-system/cilium-rwkkw" Jul 2 08:02:10.125542 sshd[3782]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:10.134176 systemd[1]: Started sshd@32-146.190.152.6:22-147.75.109.163:48690.service. Jul 2 08:02:10.141467 systemd[1]: sshd@31-146.190.152.6:22-147.75.109.163:48682.service: Deactivated successfully. Jul 2 08:02:10.143059 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 08:02:10.144124 systemd-logind[1177]: Session 29 logged out. Waiting for processes to exit. Jul 2 08:02:10.145245 systemd-logind[1177]: Removed session 29. Jul 2 08:02:10.159663 kubelet[1974]: E0702 08:02:10.159568 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-sbzj9 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-rwkkw" podUID="4b432cab-2f3c-4a0f-8ace-bb2119e4b390" Jul 2 08:02:10.211561 sshd[3793]: Accepted publickey for core from 147.75.109.163 port 48690 ssh2: RSA SHA256:u5gbVVgBoVwlaeoYroSslnQZvGkd0BmVvsfiNtowBx0 Jul 2 08:02:10.214967 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:02:10.223241 systemd[1]: Started session-30.scope. Jul 2 08:02:10.223884 systemd-logind[1177]: New session 30 of user core. 
Jul 2 08:02:10.465443 kubelet[1974]: I0702 08:02:10.465273 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-net\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465642 kubelet[1974]: I0702 08:02:10.465559 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-cgroup\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465642 kubelet[1974]: I0702 08:02:10.465599 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-run\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465642 kubelet[1974]: I0702 08:02:10.465635 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-clustermesh-secrets\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465663 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-etc-cni-netd\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465690 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hostproc\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465723 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbzj9\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-kube-api-access-sbzj9\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465761 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-kernel\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465782 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-ipsec-secrets\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.465814 kubelet[1974]: I0702 08:02:10.465802 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-lib-modules\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.466114 kubelet[1974]: 
I0702 08:02:10.465820 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hubble-tls\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.466114 kubelet[1974]: I0702 08:02:10.465845 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-xtables-lock\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.466114 kubelet[1974]: I0702 08:02:10.465871 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-bpf-maps\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.466114 kubelet[1974]: I0702 08:02:10.465895 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cni-path\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.466114 kubelet[1974]: I0702 08:02:10.465928 1974 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-config-path\") pod \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\" (UID: \"4b432cab-2f3c-4a0f-8ace-bb2119e4b390\") " Jul 2 08:02:10.468822 kubelet[1974]: I0702 08:02:10.468769 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469035 kubelet[1974]: I0702 08:02:10.465499 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469101 kubelet[1974]: I0702 08:02:10.468852 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469184 kubelet[1974]: I0702 08:02:10.468906 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469277 kubelet[1974]: I0702 08:02:10.469260 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469386 kubelet[1974]: I0702 08:02:10.469372 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469469 kubelet[1974]: I0702 08:02:10.469457 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469558 kubelet[1974]: I0702 08:02:10.469545 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469801 kubelet[1974]: I0702 08:02:10.469770 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.469870 kubelet[1974]: I0702 08:02:10.469822 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:02:10.470006 kubelet[1974]: I0702 08:02:10.469981 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:02:10.475677 systemd[1]: var-lib-kubelet-pods-4b432cab\x2d2f3c\x2d4a0f\x2d8ace\x2dbb2119e4b390-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsbzj9.mount: Deactivated successfully. 
Jul 2 08:02:10.478788 kubelet[1974]: I0702 08:02:10.478748 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-kube-api-access-sbzj9" (OuterVolumeSpecName: "kube-api-access-sbzj9") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "kube-api-access-sbzj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:02:10.479391 kubelet[1974]: I0702 08:02:10.479278 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:02:10.481485 kubelet[1974]: I0702 08:02:10.481443 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:02:10.482750 kubelet[1974]: I0702 08:02:10.482700 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b432cab-2f3c-4a0f-8ace-bb2119e4b390" (UID: "4b432cab-2f3c-4a0f-8ace-bb2119e4b390"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:02:10.566707 kubelet[1974]: I0702 08:02:10.566648 1974 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-net\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.566991 kubelet[1974]: I0702 08:02:10.566972 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-cgroup\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567112 kubelet[1974]: I0702 08:02:10.567098 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-run\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567208 kubelet[1974]: I0702 08:02:10.567195 1974 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-clustermesh-secrets\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567304 kubelet[1974]: I0702 08:02:10.567287 1974 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-etc-cni-netd\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567565 kubelet[1974]: I0702 08:02:10.567524 1974 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sbzj9\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-kube-api-access-sbzj9\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567679 kubelet[1974]: I0702 
08:02:10.567663 1974 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-host-proc-sys-kernel\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567793 kubelet[1974]: I0702 08:02:10.567780 1974 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hostproc\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567896 kubelet[1974]: I0702 08:02:10.567883 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-ipsec-secrets\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.567992 kubelet[1974]: I0702 08:02:10.567977 1974 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-lib-modules\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.568193 kubelet[1974]: I0702 08:02:10.568173 1974 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-hubble-tls\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.568309 kubelet[1974]: I0702 08:02:10.568297 1974 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-xtables-lock\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.568423 kubelet[1974]: I0702 08:02:10.568410 1974 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-bpf-maps\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.568625 kubelet[1974]: I0702 08:02:10.568605 1974 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cni-path\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:10.568734 kubelet[1974]: I0702 08:02:10.568723 1974 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b432cab-2f3c-4a0f-8ace-bb2119e4b390-cilium-config-path\") on node \"ci-3510.3.5-2-fce33301fd\" DevicePath \"\"" Jul 2 08:02:11.183477 systemd[1]: var-lib-kubelet-pods-4b432cab\x2d2f3c\x2d4a0f\x2d8ace\x2dbb2119e4b390-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 08:02:11.183621 systemd[1]: var-lib-kubelet-pods-4b432cab\x2d2f3c\x2d4a0f\x2d8ace\x2dbb2119e4b390-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:02:11.183708 systemd[1]: var-lib-kubelet-pods-4b432cab\x2d2f3c\x2d4a0f\x2d8ace\x2dbb2119e4b390-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:02:11.317948 systemd[1]: Removed slice kubepods-burstable-pod4b432cab_2f3c_4a0f_8ace_bb2119e4b390.slice. Jul 2 08:02:11.362755 kubelet[1974]: I0702 08:02:11.362704 1974 topology_manager.go:215] "Topology Admit Handler" podUID="23abebcc-d27f-4bf7-902a-ec3023a85329" podNamespace="kube-system" podName="cilium-p2sfd" Jul 2 08:02:11.369737 systemd[1]: Created slice kubepods-burstable-pod23abebcc_d27f_4bf7_902a_ec3023a85329.slice. 
Jul 2 08:02:11.475016 kubelet[1974]: I0702 08:02:11.474812 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23abebcc-d27f-4bf7-902a-ec3023a85329-hubble-tls\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475016 kubelet[1974]: I0702 08:02:11.474959 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23abebcc-d27f-4bf7-902a-ec3023a85329-cilium-config-path\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475251 kubelet[1974]: I0702 08:02:11.475025 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-host-proc-sys-net\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475251 kubelet[1974]: I0702 08:02:11.475047 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-cilium-cgroup\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475251 kubelet[1974]: I0702 08:02:11.475097 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-xtables-lock\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475251 kubelet[1974]: I0702 08:02:11.475120 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhpx\" (UniqueName: \"kubernetes.io/projected/23abebcc-d27f-4bf7-902a-ec3023a85329-kube-api-access-jdhpx\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475251 kubelet[1974]: I0702 08:02:11.475206 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-bpf-maps\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475492 kubelet[1974]: I0702 08:02:11.475276 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-hostproc\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475492 kubelet[1974]: I0702 08:02:11.475304 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-cni-path\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475492 kubelet[1974]: I0702 08:02:11.475366 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-lib-modules\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475492 kubelet[1974]: I0702 08:02:11.475402 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23abebcc-d27f-4bf7-902a-ec3023a85329-clustermesh-secrets\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475492 kubelet[1974]: I0702 08:02:11.475425 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-host-proc-sys-kernel\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475707 kubelet[1974]: I0702 08:02:11.475449 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-cilium-run\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475707 kubelet[1974]: I0702 08:02:11.475552 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/23abebcc-d27f-4bf7-902a-ec3023a85329-cilium-ipsec-secrets\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.475707 kubelet[1974]: I0702 08:02:11.475580 1974 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23abebcc-d27f-4bf7-902a-ec3023a85329-etc-cni-netd\") pod \"cilium-p2sfd\" (UID: \"23abebcc-d27f-4bf7-902a-ec3023a85329\") " pod="kube-system/cilium-p2sfd" Jul 2 08:02:11.673090 kubelet[1974]: E0702 08:02:11.673048 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:11.673849 env[1191]: time="2024-07-02T08:02:11.673806340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2sfd,Uid:23abebcc-d27f-4bf7-902a-ec3023a85329,Namespace:kube-system,Attempt:0,}" Jul 2 08:02:11.706063 kubelet[1974]: I0702 08:02:11.706022 1974 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4b432cab-2f3c-4a0f-8ace-bb2119e4b390" path="/var/lib/kubelet/pods/4b432cab-2f3c-4a0f-8ace-bb2119e4b390/volumes" Jul 2 08:02:11.711463 env[1191]: time="2024-07-02T08:02:11.711358966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:02:11.711463 env[1191]: time="2024-07-02T08:02:11.711413062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:02:11.711463 env[1191]: time="2024-07-02T08:02:11.711425388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:02:11.712104 env[1191]: time="2024-07-02T08:02:11.712037274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3 pid=3823 runtime=io.containerd.runc.v2 Jul 2 08:02:11.739084 systemd[1]: Started cri-containerd-3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3.scope. Jul 2 08:02:11.776301 env[1191]: time="2024-07-02T08:02:11.776242675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2sfd,Uid:23abebcc-d27f-4bf7-902a-ec3023a85329,Namespace:kube-system,Attempt:0,} returns sandbox id \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\"" Jul 2 08:02:11.777262 kubelet[1974]: E0702 08:02:11.777236 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:11.782452 env[1191]: time="2024-07-02T08:02:11.781729565Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:02:11.805041 env[1191]: time="2024-07-02T08:02:11.804888504Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b\"" Jul 2 08:02:11.806059 env[1191]: time="2024-07-02T08:02:11.805804625Z" level=info msg="StartContainer for \"1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b\"" Jul 2 08:02:11.828062 systemd[1]: Started cri-containerd-1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b.scope. Jul 2 08:02:11.881908 env[1191]: time="2024-07-02T08:02:11.881840819Z" level=info msg="StartContainer for \"1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b\" returns successfully" Jul 2 08:02:11.897488 systemd[1]: cri-containerd-1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b.scope: Deactivated successfully. 
Jul 2 08:02:11.976950 env[1191]: time="2024-07-02T08:02:11.976882790Z" level=info msg="shim disconnected" id=1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b Jul 2 08:02:11.977401 env[1191]: time="2024-07-02T08:02:11.977368565Z" level=warning msg="cleaning up after shim disconnected" id=1cf6a5269c39aa45e6ede6ca9206132fc979c749bd853e867ec6823fa068f59b namespace=k8s.io Jul 2 08:02:11.977886 env[1191]: time="2024-07-02T08:02:11.977859651Z" level=info msg="cleaning up dead shim" Jul 2 08:02:11.989474 env[1191]: time="2024-07-02T08:02:11.989396704Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n" Jul 2 08:02:12.317388 kubelet[1974]: E0702 08:02:12.316739 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:12.322145 env[1191]: time="2024-07-02T08:02:12.320862520Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:02:12.365903 env[1191]: time="2024-07-02T08:02:12.365831988Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b\"" Jul 2 08:02:12.367311 env[1191]: time="2024-07-02T08:02:12.367265614Z" level=info msg="StartContainer for \"7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b\"" Jul 2 08:02:12.402626 systemd[1]: Started cri-containerd-7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b.scope. Jul 2 08:02:12.455097 env[1191]: time="2024-07-02T08:02:12.454997944Z" level=info msg="StartContainer for \"7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b\" returns successfully" Jul 2 08:02:12.504001 systemd[1]: cri-containerd-7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b.scope: Deactivated successfully. Jul 2 08:02:12.556295 env[1191]: time="2024-07-02T08:02:12.556231786Z" level=info msg="shim disconnected" id=7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b Jul 2 08:02:12.556807 env[1191]: time="2024-07-02T08:02:12.556771323Z" level=warning msg="cleaning up after shim disconnected" id=7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b namespace=k8s.io Jul 2 08:02:12.556935 env[1191]: time="2024-07-02T08:02:12.556916966Z" level=info msg="cleaning up dead shim" Jul 2 08:02:12.579584 env[1191]: time="2024-07-02T08:02:12.579521306Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Jul 2 08:02:13.183858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7041ec8dcb7f215945684f6cfaf61530184172bb36a4d1e9675213c2ecae643b-rootfs.mount: Deactivated successfully. 
Jul 2 08:02:13.321646 kubelet[1974]: E0702 08:02:13.321609 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:13.324439 env[1191]: time="2024-07-02T08:02:13.324389578Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:02:13.355343 env[1191]: time="2024-07-02T08:02:13.355238547Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d\"" Jul 2 08:02:13.356311 env[1191]: time="2024-07-02T08:02:13.356267917Z" level=info msg="StartContainer for \"d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d\"" Jul 2 08:02:13.390970 systemd[1]: Started cri-containerd-d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d.scope. Jul 2 08:02:13.446054 systemd[1]: cri-containerd-d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d.scope: Deactivated successfully. Jul 2 08:02:13.448030 env[1191]: time="2024-07-02T08:02:13.447970207Z" level=info msg="StartContainer for \"d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d\" returns successfully" Jul 2 08:02:13.488024 env[1191]: time="2024-07-02T08:02:13.487965333Z" level=info msg="shim disconnected" id=d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d Jul 2 08:02:13.488491 env[1191]: time="2024-07-02T08:02:13.488464192Z" level=warning msg="cleaning up after shim disconnected" id=d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d namespace=k8s.io Jul 2 08:02:13.488793 env[1191]: time="2024-07-02T08:02:13.488621585Z" level=info msg="cleaning up dead shim" Jul 2 08:02:13.506504 env[1191]: time="2024-07-02T08:02:13.506451892Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4028 runtime=io.containerd.runc.v2\n" Jul 2 08:02:13.838372 kubelet[1974]: E0702 08:02:13.838312 1974 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:02:14.184232 systemd[1]: run-containerd-runc-k8s.io-d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d-runc.gri32I.mount: Deactivated successfully. Jul 2 08:02:14.184393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d47cb71eb191bf69a7e3261627f9d8fa62fe1449af069207e1d6038bf0f62a1d-rootfs.mount: Deactivated successfully. Jul 2 08:02:14.327938 kubelet[1974]: E0702 08:02:14.326890 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:14.335986 env[1191]: time="2024-07-02T08:02:14.335890452Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:02:14.368528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357702962.mount: Deactivated successfully. 
Jul 2 08:02:14.378009 env[1191]: time="2024-07-02T08:02:14.377954288Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a\"" Jul 2 08:02:14.379288 env[1191]: time="2024-07-02T08:02:14.379251474Z" level=info msg="StartContainer for \"70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a\"" Jul 2 08:02:14.413228 systemd[1]: Started cri-containerd-70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a.scope. Jul 2 08:02:14.457059 systemd[1]: cri-containerd-70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a.scope: Deactivated successfully. Jul 2 08:02:14.459628 env[1191]: time="2024-07-02T08:02:14.459540815Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23abebcc_d27f_4bf7_902a_ec3023a85329.slice/cri-containerd-70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a.scope/memory.events\": no such file or directory" Jul 2 08:02:14.466753 env[1191]: time="2024-07-02T08:02:14.466550218Z" level=info msg="StartContainer for \"70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a\" returns successfully" Jul 2 08:02:14.511630 env[1191]: time="2024-07-02T08:02:14.505127894Z" level=info msg="shim disconnected" id=70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a Jul 2 08:02:14.511630 env[1191]: time="2024-07-02T08:02:14.505180297Z" level=warning msg="cleaning up after shim disconnected" id=70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a namespace=k8s.io Jul 2 08:02:14.511630 env[1191]: time="2024-07-02T08:02:14.505189424Z" level=info msg="cleaning up dead shim" Jul 2 08:02:14.524767 env[1191]: time="2024-07-02T08:02:14.524725471Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:02:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4085 runtime=io.containerd.runc.v2\n" Jul 2 08:02:15.185221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70bbfdfb911408c79918888c25ac0d4adcf8c309a91e055e6776aa7afbd6919a-rootfs.mount: Deactivated successfully. Jul 2 08:02:15.332057 kubelet[1974]: E0702 08:02:15.332020 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:15.337522 env[1191]: time="2024-07-02T08:02:15.336597091Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:02:15.359548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276249313.mount: Deactivated successfully. Jul 2 08:02:15.374292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037057844.mount: Deactivated successfully. 
Jul 2 08:02:15.378123 env[1191]: time="2024-07-02T08:02:15.378047149Z" level=info msg="CreateContainer within sandbox \"3434df07289cbe9e799ae682731468323a89639a741278399ee99d7ef4068ef3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592\"" Jul 2 08:02:15.380535 env[1191]: time="2024-07-02T08:02:15.379632091Z" level=info msg="StartContainer for \"3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592\"" Jul 2 08:02:15.401787 systemd[1]: Started cri-containerd-3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592.scope. Jul 2 08:02:15.453144 env[1191]: time="2024-07-02T08:02:15.451635658Z" level=info msg="StartContainer for \"3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592\" returns successfully" Jul 2 08:02:16.030412 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 08:02:16.347523 kubelet[1974]: E0702 08:02:16.347494 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:16.670360 systemd[1]: run-containerd-runc-k8s.io-3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592-runc.etHaua.mount: Deactivated successfully. Jul 2 08:02:16.671825 kubelet[1974]: I0702 08:02:16.670842 1974 setters.go:568] "Node became not ready" node="ci-3510.3.5-2-fce33301fd" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:02:16Z","lastTransitionTime":"2024-07-02T08:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:02:17.676100 kubelet[1974]: E0702 08:02:17.676039 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:18.906268 systemd[1]: run-containerd-runc-k8s.io-3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592-runc.Q9BWFk.mount: Deactivated successfully. Jul 2 08:02:19.745847 systemd-networkd[998]: lxc_health: Link UP Jul 2 08:02:19.778429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:02:19.781179 systemd-networkd[998]: lxc_health: Gained carrier Jul 2 08:02:20.075014 systemd[1]: Started sshd@33-146.190.152.6:22-95.237.101.14:42322.service. Jul 2 08:02:20.705146 kubelet[1974]: E0702 08:02:20.703376 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:20.960712 systemd-networkd[998]: lxc_health: Gained IPv6LL Jul 2 08:02:21.175242 systemd[1]: run-containerd-runc-k8s.io-3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592-runc.r4uLXV.mount: Deactivated successfully. 
Jul 2 08:02:21.678045 kubelet[1974]: E0702 08:02:21.677990 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:21.734533 kubelet[1974]: I0702 08:02:21.734466 1974 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-p2sfd" podStartSLOduration=10.734392231 podStartE2EDuration="10.734392231s" podCreationTimestamp="2024-07-02 08:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:02:16.373157529 +0000 UTC m=+152.994689528" watchObservedRunningTime="2024-07-02 08:02:21.734392231 +0000 UTC m=+158.355924246" Jul 2 08:02:22.364773 kubelet[1974]: E0702 08:02:22.364724 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:23.368156 kubelet[1974]: E0702 08:02:23.368042 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 08:02:23.552874 systemd[1]: run-containerd-runc-k8s.io-3ac90a37b32a1830310f101f080afe7d3aa8d93e41fff479e38481c512712592-runc.Y2e6D7.mount: Deactivated successfully. Jul 2 08:02:25.934508 sshd[3793]: pam_unix(sshd:session): session closed for user core Jul 2 08:02:25.938699 systemd[1]: sshd@32-146.190.152.6:22-147.75.109.163:48690.service: Deactivated successfully. Jul 2 08:02:25.939871 systemd[1]: session-30.scope: Deactivated successfully. Jul 2 08:02:25.941129 systemd-logind[1177]: Session 30 logged out. Waiting for processes to exit. Jul 2 08:02:25.942657 systemd-logind[1177]: Removed session 30.