Oct 2 19:44:55.786171 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:44:55.786189 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:44:55.786197 kernel: BIOS-provided physical RAM map: Oct 2 19:44:55.786202 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:44:55.786208 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:44:55.786213 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:44:55.786219 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:44:55.786225 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:44:55.786232 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:44:55.786237 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:44:55.786243 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:44:55.786248 kernel: NX (Execute Disable) protection: active Oct 2 19:44:55.786253 kernel: SMBIOS 2.8 present. Oct 2 19:44:55.786259 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:44:55.786267 kernel: Hypervisor detected: KVM Oct 2 19:44:55.786273 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:44:55.786279 kernel: kvm-clock: cpu 0, msr 25f8a001, primary cpu clock Oct 2 19:44:55.786285 kernel: kvm-clock: using sched offset of 2142371293 cycles Oct 2 19:44:55.786291 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:44:55.786297 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:44:55.786304 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:44:55.786310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:44:55.786316 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:44:55.786323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:44:55.786330 kernel: Using GB pages for direct mapping Oct 2 19:44:55.786336 kernel: ACPI: Early table checksum verification disabled Oct 2 19:44:55.786342 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:44:55.786348 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786354 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786360 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786366 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:44:55.786372 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786379 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786385 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:44:55.786391 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:44:55.786397 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:44:55.786403 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:44:55.786409 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:44:55.786415 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:44:55.786421 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:44:55.786430 kernel: No NUMA configuration found Oct 2 19:44:55.786437 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:44:55.786448 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:44:55.786476 kernel: Zone ranges: Oct 2 19:44:55.786483 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:44:55.786489 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:44:55.786498 kernel: Normal empty Oct 2 19:44:55.786507 kernel: Movable zone start for each node Oct 2 19:44:55.786514 kernel: Early memory node ranges Oct 2 19:44:55.786520 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:44:55.786526 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:44:55.786533 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:44:55.786539 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:44:55.786552 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:44:55.786558 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:44:55.786566 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:44:55.786572 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:44:55.786579 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:44:55.786585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:44:55.786592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:44:55.786598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:44:55.786605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:44:55.786611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:44:55.786617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:44:55.786625 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:44:55.786631 kernel: TSC deadline timer available Oct 2 19:44:55.786637 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:44:55.786644 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:44:55.786650 kernel: kvm-guest: setup PV sched yield Oct 2 19:44:55.786656 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:44:55.786663 kernel: Booting paravirtualized kernel on KVM Oct 2 19:44:55.786669 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:44:55.786676 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:44:55.786684 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:44:55.786690 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:44:55.786696 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:44:55.786702 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:44:55.786709 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:44:55.786715 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:44:55.786722 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:44:55.786728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:44:55.786734 kernel: Policy zone: DMA32 Oct 2 19:44:55.786742 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:44:55.786750 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:44:55.786757 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:44:55.786763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:44:55.786770 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:44:55.786776 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:44:55.786798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:44:55.786805 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:44:55.786811 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:44:55.786819 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:44:55.786826 kernel: rcu: RCU event tracing is enabled. Oct 2 19:44:55.786833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:44:55.786839 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:44:55.786845 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:44:55.786852 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:44:55.786858 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:44:55.786865 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:44:55.786871 kernel: random: crng init done Oct 2 19:44:55.786879 kernel: Console: colour VGA+ 80x25 Oct 2 19:44:55.786885 kernel: printk: console [ttyS0] enabled Oct 2 19:44:55.786892 kernel: ACPI: Core revision 20210730 Oct 2 19:44:55.786898 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:44:55.786905 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:44:55.786911 kernel: x2apic enabled Oct 2 19:44:55.786918 kernel: Switched APIC routing to physical x2apic. Oct 2 19:44:55.786924 kernel: kvm-guest: setup PV IPIs Oct 2 19:44:55.786931 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:44:55.786938 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:44:55.786945 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:44:55.786951 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:44:55.786958 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:44:55.786964 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:44:55.786970 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:44:55.786977 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:44:55.786983 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:44:55.786990 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:44:55.787003 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:44:55.787010 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:44:55.787018 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:44:55.787025 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:44:55.787031 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:44:55.787038 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:44:55.787045 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:44:55.787052 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:44:55.787059 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:44:55.787067 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:44:55.787073 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:44:55.787080 kernel: LSM: Security Framework initializing Oct 2 19:44:55.787087 kernel: SELinux: Initializing. Oct 2 19:44:55.787094 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:44:55.787100 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:44:55.787107 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:44:55.787115 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:44:55.787122 kernel: ... version: 0 Oct 2 19:44:55.787128 kernel: ... bit width: 48 Oct 2 19:44:55.787135 kernel: ... generic registers: 6 Oct 2 19:44:55.787142 kernel: ... value mask: 0000ffffffffffff Oct 2 19:44:55.787149 kernel: ... max period: 00007fffffffffff Oct 2 19:44:55.787155 kernel: ... fixed-purpose events: 0 Oct 2 19:44:55.787162 kernel: ... event mask: 000000000000003f Oct 2 19:44:55.787169 kernel: signal: max sigframe size: 1776 Oct 2 19:44:55.787177 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:44:55.787183 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:44:55.787190 kernel: x86: Booting SMP configuration: Oct 2 19:44:55.787197 kernel: .... 
node #0, CPUs: #1 Oct 2 19:44:55.787204 kernel: kvm-clock: cpu 1, msr 25f8a041, secondary cpu clock Oct 2 19:44:55.787210 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:44:55.787217 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:44:55.787224 kernel: #2 Oct 2 19:44:55.787231 kernel: kvm-clock: cpu 2, msr 25f8a081, secondary cpu clock Oct 2 19:44:55.787237 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:44:55.787245 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:44:55.787252 kernel: #3 Oct 2 19:44:55.787259 kernel: kvm-clock: cpu 3, msr 25f8a0c1, secondary cpu clock Oct 2 19:44:55.787265 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:44:55.787272 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:44:55.787279 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:44:55.787285 kernel: smpboot: Max logical packages: 1 Oct 2 19:44:55.787292 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:44:55.787299 kernel: devtmpfs: initialized Oct 2 19:44:55.787307 kernel: x86/mm: Memory block size: 128MB Oct 2 19:44:55.787314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:44:55.787321 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:44:55.787328 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:44:55.787335 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:44:55.787341 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:44:55.787348 kernel: audit: type=2000 audit(1696275895.202:1): state=initialized audit_enabled=0 res=1 Oct 2 19:44:55.787355 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:44:55.787361 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:44:55.787369 kernel: cpuidle: using governor menu Oct 2 19:44:55.787376 kernel: ACPI: bus type PCI registered Oct 2 19:44:55.787383 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:44:55.787389 kernel: dca service started, version 1.12.1 Oct 2 19:44:55.787396 kernel: PCI: Using configuration type 1 for base access Oct 2 19:44:55.787403 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:44:55.787410 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:44:55.787417 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:44:55.787423 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:44:55.787432 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:44:55.787438 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:44:55.787445 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:44:55.787452 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:44:55.787458 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:44:55.787465 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:44:55.787472 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:44:55.787479 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:44:55.787485 kernel: ACPI: Interpreter enabled Oct 2 19:44:55.787493 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:44:55.787500 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:44:55.787506 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:44:55.787513 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:44:55.787520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:44:55.787638 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:44:55.787649 kernel: acpiphp: Slot [3] registered Oct 2 19:44:55.787656 kernel: acpiphp: Slot [4] registered Oct 2 19:44:55.787664 kernel: acpiphp: Slot [5] registered Oct 2 19:44:55.787671 kernel: acpiphp: Slot [6] registered Oct 2 19:44:55.787678 kernel: acpiphp: Slot [7] registered Oct 2 19:44:55.787684 kernel: acpiphp: Slot [8] registered Oct 2 19:44:55.787691 kernel: acpiphp: Slot [9] registered Oct 2 19:44:55.787697 kernel: acpiphp: Slot [10] registered Oct 2 19:44:55.787704 kernel: acpiphp: Slot [11] registered Oct 2 19:44:55.787711 kernel: acpiphp: Slot [12] registered Oct 2 19:44:55.787718 kernel: acpiphp: Slot [13] registered Oct 2 19:44:55.787724 kernel: acpiphp: Slot [14] registered Oct 2 19:44:55.787732 kernel: acpiphp: Slot [15] registered Oct 2 19:44:55.787739 kernel: acpiphp: Slot [16] registered Oct 2 19:44:55.787746 kernel: acpiphp: Slot [17] registered Oct 2 19:44:55.787752 kernel: acpiphp: Slot [18] registered Oct 2 19:44:55.787759 kernel: acpiphp: Slot [19] registered Oct 2 19:44:55.787765 kernel: acpiphp: Slot [20] registered Oct 2 19:44:55.787772 kernel: acpiphp: Slot [21] registered Oct 2 19:44:55.787779 kernel: acpiphp: Slot [22] registered Oct 2 19:44:55.787813 kernel: acpiphp: Slot [23] registered Oct 2 19:44:55.787822 kernel: acpiphp: Slot [24] registered Oct 2 19:44:55.787828 kernel: acpiphp: Slot [25] registered Oct 2 19:44:55.787835 kernel: acpiphp: Slot [26] registered Oct 2 19:44:55.787841 kernel: acpiphp: Slot [27] registered Oct 2 19:44:55.787848 kernel: acpiphp: Slot [28] registered Oct 2 19:44:55.787855 kernel: acpiphp: Slot [29] registered Oct 2 19:44:55.787861 kernel: acpiphp: Slot [30] registered Oct 2 19:44:55.787868 kernel: acpiphp: Slot [31] registered Oct 2 19:44:55.787875 kernel: PCI host bridge to bus 0000:00 Oct 2 19:44:55.787953 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:44:55.788017 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:44:55.788077 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:44:55.788136 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:44:55.788195 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:44:55.788254 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:44:55.788332 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:44:55.788411 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:44:55.788491 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:44:55.788568 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:44:55.788641 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:44:55.788709 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:44:55.788777 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:44:55.788856 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:44:55.788934 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:44:55.789001 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:44:55.789067 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:44:55.789141 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:44:55.789208 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:44:55.789273 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:44:55.789343 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:44:55.789409 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:44:55.789484 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:44:55.789559 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:44:55.789630 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:44:55.789699 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:44:55.789772 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:44:55.789879 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:44:55.789947 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:44:55.790013 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:44:55.790087 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:44:55.790154 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:44:55.790221 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:44:55.790290 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:44:55.790360 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:44:55.790369 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:44:55.790376 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:44:55.790383 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:44:55.790390 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:44:55.790396 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:44:55.790403 kernel: iommu: Default domain type: Translated Oct 2 19:44:55.790410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:44:55.790477 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:44:55.790554 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:44:55.790624 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:44:55.790632 kernel: 
vgaarb: loaded Oct 2 19:44:55.790639 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:44:55.790646 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:44:55.790653 kernel: PTP clock support registered Oct 2 19:44:55.790660 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:44:55.790667 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:44:55.790676 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:44:55.790683 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:44:55.790689 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:44:55.790696 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:44:55.790703 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:44:55.790709 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:44:55.790716 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:44:55.790723 kernel: pnp: PnP ACPI init Oct 2 19:44:55.790806 kernel: pnp 00:02: [dma 2] Oct 2 19:44:55.790841 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:44:55.790848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:44:55.790855 kernel: NET: Registered PF_INET protocol family Oct 2 19:44:55.790862 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:44:55.790869 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:44:55.790876 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:44:55.790882 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:44:55.790890 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:44:55.790898 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:44:55.790905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:44:55.790912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:44:55.790918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:44:55.790925 kernel: NET: Registered PF_XDP protocol family Oct 2 19:44:55.791005 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:44:55.791079 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:44:55.791153 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:44:55.791242 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:44:55.791321 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:44:55.791405 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:44:55.791488 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:44:55.791577 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:44:55.791587 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:44:55.791594 kernel: Initialise system trusted keyrings Oct 2 19:44:55.791600 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:44:55.791607 kernel: Key type asymmetric registered Oct 2 19:44:55.791616 kernel: Asymmetric key parser 'x509' registered Oct 2 19:44:55.791623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:44:55.791630 kernel: io scheduler mq-deadline registered Oct 2 19:44:55.791636 kernel: io scheduler kyber registered Oct 2 19:44:55.791643 kernel: io scheduler bfq registered Oct 2 19:44:55.791662 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:44:55.791669 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:44:55.791676 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:44:55.791683 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:44:55.791691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:44:55.791698 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:44:55.791705 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:44:55.791712 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:44:55.791718 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:44:55.791725 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:44:55.791890 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:44:55.791971 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:44:55.792050 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:44:55 UTC (1696275895) Oct 2 19:44:55.792141 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:44:55.792150 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:44:55.792157 kernel: Segment Routing with IPv6 Oct 2 19:44:55.792164 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:44:55.792182 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:44:55.792189 kernel: Key type dns_resolver registered Oct 2 19:44:55.792196 kernel: IPI shorthand broadcast: enabled Oct 2 19:44:55.792203 kernel: sched_clock: Marking stable (354379761, 70916003)->(433258604, -7962840) Oct 2 19:44:55.792212 kernel: registered taskstats version 1 Oct 2 19:44:55.792232 kernel: Loading compiled-in X.509 certificates Oct 2 19:44:55.792243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:44:55.792250 kernel: Key type .fscrypt registered Oct 2 19:44:55.792257 kernel: Key type fscrypt-provisioning registered Oct 2 19:44:55.792264 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:44:55.792271 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:44:55.792284 kernel: ima: No architecture policies found Oct 2 19:44:55.792303 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:44:55.792310 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:44:55.792317 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:44:55.792324 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:44:55.792338 kernel: Run /init as init process Oct 2 19:44:55.792349 kernel: with arguments: Oct 2 19:44:55.792356 kernel: /init Oct 2 19:44:55.792363 kernel: with environment: Oct 2 19:44:55.792390 kernel: HOME=/ Oct 2 19:44:55.792400 kernel: TERM=linux Oct 2 19:44:55.792408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:44:55.792417 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:44:55.792427 systemd[1]: Detected virtualization kvm. Oct 2 19:44:55.792434 systemd[1]: Detected architecture x86-64. Oct 2 19:44:55.792453 systemd[1]: Running in initrd. Oct 2 19:44:55.792461 systemd[1]: No hostname configured, using default hostname. 
Oct 2 19:44:55.792468 systemd[1]: Hostname set to . Oct 2 19:44:55.792477 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:44:55.792485 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:44:55.792492 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:44:55.792500 systemd[1]: Reached target cryptsetup.target. Oct 2 19:44:55.792519 systemd[1]: Reached target paths.target. Oct 2 19:44:55.792527 systemd[1]: Reached target slices.target. Oct 2 19:44:55.792534 systemd[1]: Reached target swap.target. Oct 2 19:44:55.792548 systemd[1]: Reached target timers.target. Oct 2 19:44:55.792558 systemd[1]: Listening on iscsid.socket. Oct 2 19:44:55.792577 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:44:55.792585 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:44:55.792592 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:44:55.792600 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:44:55.792607 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:44:55.792615 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:44:55.792632 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:44:55.792643 systemd[1]: Reached target sockets.target. Oct 2 19:44:55.792651 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:44:55.792658 systemd[1]: Finished network-cleanup.service. Oct 2 19:44:55.792666 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:44:55.792673 systemd[1]: Starting systemd-journald.service... Oct 2 19:44:55.792693 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:44:55.792702 systemd[1]: Starting systemd-resolved.service... Oct 2 19:44:55.792710 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:44:55.792717 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:44:55.792732 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:44:55.792743 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:44:55.792754 systemd-journald[196]: Journal started Oct 2 19:44:55.792814 systemd-journald[196]: Runtime Journal (/run/log/journal/2806a812693d422ea44d06fe612e82fc) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:44:55.784667 systemd-modules-load[197]: Inserted module 'overlay' Oct 2 19:44:55.817227 systemd[1]: Started systemd-journald.service. Oct 2 19:44:55.817250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:44:55.817266 kernel: Bridge firewalling registered Oct 2 19:44:55.817276 kernel: audit: type=1130 audit(1696275895.813:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.799415 systemd-resolved[198]: Positive Trust Anchors: Oct 2 19:44:55.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.799422 systemd-resolved[198]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:44:55.799448 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:44:55.801528 systemd-resolved[198]: Defaulting to hostname 'linux'. Oct 2 19:44:55.805600 systemd-modules-load[197]: Inserted module 'br_netfilter' Oct 2 19:44:55.817175 systemd[1]: Started systemd-resolved.service. Oct 2 19:44:55.818352 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:44:55.820814 kernel: audit: type=1130 audit(1696275895.817:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.826699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:44:55.829861 kernel: audit: type=1130 audit(1696275895.825:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.829875 kernel: SCSI subsystem initialized Oct 2 19:44:55.829884 kernel: audit: type=1130 audit(1696275895.829:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.829971 systemd[1]: Reached target nss-lookup.target. Oct 2 19:44:55.833359 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:44:55.839095 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:44:55.839124 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:44:55.839920 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:44:55.842525 systemd-modules-load[197]: Inserted module 'dm_multipath' Oct 2 19:44:55.843196 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:44:55.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.843896 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:44:55.845802 kernel: audit: type=1130 audit(1696275895.842:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.850584 systemd[1]: Finished dracut-cmdline-ask.service. 
Oct 2 19:44:55.853954 kernel: audit: type=1130 audit(1696275895.850:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.851817 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:44:55.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.856827 kernel: audit: type=1130 audit(1696275895.853:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.853204 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:44:55.863344 dracut-cmdline[220]: dracut-dracut-053 Oct 2 19:44:55.864979 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:44:55.914803 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:44:55.924809 kernel: iscsi: registered transport (tcp) Oct 2 19:44:55.943812 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:44:55.943836 kernel: QLogic iSCSI HBA Driver Oct 2 19:44:55.969606 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:44:55.972473 kernel: audit: type=1130 audit(1696275895.968:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:55.972482 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:44:56.017811 kernel: raid6: avx2x4 gen() 30975 MB/s Oct 2 19:44:56.034797 kernel: raid6: avx2x4 xor() 8371 MB/s Oct 2 19:44:56.051795 kernel: raid6: avx2x2 gen() 32852 MB/s Oct 2 19:44:56.068797 kernel: raid6: avx2x2 xor() 19369 MB/s Oct 2 19:44:56.085799 kernel: raid6: avx2x1 gen() 26772 MB/s Oct 2 19:44:56.102796 kernel: raid6: avx2x1 xor() 15451 MB/s Oct 2 19:44:56.119796 kernel: raid6: sse2x4 gen() 14915 MB/s Oct 2 19:44:56.136799 kernel: raid6: sse2x4 xor() 7639 MB/s Oct 2 19:44:56.153795 kernel: raid6: sse2x2 gen() 16554 MB/s Oct 2 19:44:56.170799 kernel: raid6: sse2x2 xor() 9885 MB/s Oct 2 19:44:56.187796 kernel: raid6: sse2x1 gen() 12410 MB/s Oct 2 19:44:56.205136 kernel: raid6: sse2x1 xor() 7839 MB/s Oct 2 19:44:56.205148 kernel: raid6: using algorithm avx2x2 gen() 32852 MB/s Oct 2 19:44:56.205161 kernel: raid6: .... 
xor() 19369 MB/s, rmw enabled Oct 2 19:44:56.205169 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:44:56.216805 kernel: xor: automatically using best checksumming function avx Oct 2 19:44:56.304810 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:44:56.311696 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:44:56.314544 kernel: audit: type=1130 audit(1696275896.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:56.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:56.314000 audit: BPF prog-id=7 op=LOAD Oct 2 19:44:56.314000 audit: BPF prog-id=8 op=LOAD Oct 2 19:44:56.314975 systemd[1]: Starting systemd-udevd.service... Oct 2 19:44:56.326360 systemd-udevd[397]: Using default interface naming scheme 'v252'. Oct 2 19:44:56.330588 systemd[1]: Started systemd-udevd.service. Oct 2 19:44:56.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:56.331298 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:44:56.340877 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Oct 2 19:44:56.362470 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:44:56.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:56.363634 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:44:56.394992 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:44:56.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:56.420810 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:44:56.422813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:44:56.423807 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:44:56.434807 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:44:56.434827 kernel: libata version 3.00 loaded. Oct 2 19:44:56.436806 kernel: AES CTR mode by8 optimization enabled Oct 2 19:44:56.437819 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:44:56.441860 kernel: scsi host0: ata_piix Oct 2 19:44:56.441978 kernel: scsi host1: ata_piix Oct 2 19:44:56.442059 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:44:56.442069 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:44:56.453814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Oct 2 19:44:56.456906 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:44:56.478575 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:44:56.482489 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:44:56.483291 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:44:56.487760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 19:44:56.493669 systemd[1]: Starting disk-uuid.service... Oct 2 19:44:56.502800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:44:56.506808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:44:56.600811 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:44:56.600878 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:44:56.628810 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:44:56.628958 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:44:56.645809 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:44:57.511808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:44:57.512311 disk-uuid[518]: The operation has completed successfully. Oct 2 19:44:57.536419 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:44:57.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.536508 systemd[1]: Finished disk-uuid.service. Oct 2 19:44:57.538710 systemd[1]: Starting verity-setup.service... Oct 2 19:44:57.549806 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:44:57.577671 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:44:57.579305 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:44:57.580499 systemd[1]: Finished verity-setup.service. Oct 2 19:44:57.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.644808 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:44:57.645057 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:44:57.645212 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:44:57.646022 systemd[1]: Starting ignition-setup.service... Oct 2 19:44:57.647883 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:44:57.656220 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:44:57.656259 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:44:57.656273 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:44:57.663100 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:44:57.671093 systemd[1]: Finished ignition-setup.service. Oct 2 19:44:57.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.672432 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:44:57.706476 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:44:57.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.706000 audit: BPF prog-id=9 op=LOAD Oct 2 19:44:57.708103 systemd[1]: Starting systemd-networkd.service... 
Oct 2 19:44:57.726269 systemd-networkd[692]: lo: Link UP Oct 2 19:44:57.726277 systemd-networkd[692]: lo: Gained carrier Oct 2 19:44:57.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.726633 systemd-networkd[692]: Enumeration completed Oct 2 19:44:57.726707 systemd[1]: Started systemd-networkd.service. Oct 2 19:44:57.726808 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:44:57.727812 systemd-networkd[692]: eth0: Link UP Oct 2 19:44:57.727815 systemd-networkd[692]: eth0: Gained carrier Oct 2 19:44:57.728467 systemd[1]: Reached target network.target. Oct 2 19:44:57.733658 systemd[1]: Starting iscsiuio.service... Oct 2 19:44:57.738662 systemd[1]: Started iscsiuio.service. Oct 2 19:44:57.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.740385 systemd[1]: Starting iscsid.service... Oct 2 19:44:57.744156 iscsid[702]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:44:57.744156 iscsid[702]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:44:57.744156 iscsid[702]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:44:57.744156 iscsid[702]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:44:57.744156 iscsid[702]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:44:57.744156 iscsid[702]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:44:57.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.747827 systemd[1]: Started iscsid.service. Oct 2 19:44:57.751146 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:44:57.755256 systemd-networkd[692]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:44:57.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.760476 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:44:57.761325 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:44:57.762139 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:44:57.763682 systemd[1]: Reached target remote-fs.target. Oct 2 19:44:57.767205 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:44:57.774404 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:44:57.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:44:57.777953 ignition[628]: Ignition 2.14.0 Oct 2 19:44:57.777965 ignition[628]: Stage: fetch-offline Oct 2 19:44:57.778300 ignition[628]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:44:57.778308 ignition[628]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:44:57.778399 ignition[628]: parsed url from cmdline: "" Oct 2 19:44:57.778402 ignition[628]: no config URL provided Oct 2 19:44:57.778406 ignition[628]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:44:57.778412 ignition[628]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:44:57.778429 ignition[628]: op(1): [started] loading QEMU firmware config module Oct 2 19:44:57.778436 ignition[628]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:44:57.787159 ignition[628]: op(1): [finished] loading QEMU firmware config module Oct 2 19:44:57.787186 ignition[628]: QEMU firmware config was not found. Ignoring... Oct 2 19:44:57.815532 ignition[628]: parsing config with SHA512: 3617290438fe7f03c0fd2aafd0ff8b15e407837f29fde353ebde16cffe8e3f2a65a7b42335e4d8faa1fd7a33a800cc3532f22794b76d5bc7797baedadcd06e71 Oct 2 19:44:57.833952 systemd-resolved[198]: Detected conflict on linux IN A 10.0.0.9 Oct 2 19:44:57.833973 systemd-resolved[198]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 2 19:44:57.842762 unknown[628]: fetched base config from "system" Oct 2 19:44:57.842775 unknown[628]: fetched user config from "qemu" Oct 2 19:44:57.843264 ignition[628]: fetch-offline: fetch-offline passed Oct 2 19:44:57.843314 ignition[628]: Ignition finished successfully Oct 2 19:44:57.844806 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:44:57.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.845751 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:44:57.846501 systemd[1]: Starting ignition-kargs.service... Oct 2 19:44:57.854714 ignition[718]: Ignition 2.14.0 Oct 2 19:44:57.854723 ignition[718]: Stage: kargs Oct 2 19:44:57.854830 ignition[718]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:44:57.854840 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:44:57.855854 ignition[718]: kargs: kargs passed Oct 2 19:44:57.857204 systemd[1]: Finished ignition-kargs.service. Oct 2 19:44:57.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.855890 ignition[718]: Ignition finished successfully Oct 2 19:44:57.858732 systemd[1]: Starting ignition-disks.service... Oct 2 19:44:57.864762 ignition[725]: Ignition 2.14.0 Oct 2 19:44:57.864772 ignition[725]: Stage: disks Oct 2 19:44:57.864864 ignition[725]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:44:57.864873 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:44:57.867803 ignition[725]: disks: disks passed Oct 2 19:44:57.867838 ignition[725]: Ignition finished successfully Oct 2 19:44:57.869234 systemd[1]: Finished ignition-disks.service. Oct 2 19:44:57.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:44:57.869860 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:44:57.870935 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:44:57.871075 systemd[1]: Reached target local-fs.target. Oct 2 19:44:57.871282 systemd[1]: Reached target sysinit.target. Oct 2 19:44:57.871497 systemd[1]: Reached target basic.target. Oct 2 19:44:57.874735 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:44:57.885272 systemd-fsck[733]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:44:57.889629 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:44:57.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.891250 systemd[1]: Mounting sysroot.mount... Oct 2 19:44:57.895757 systemd[1]: Mounted sysroot.mount. Oct 2 19:44:57.896006 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:44:57.895915 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:44:57.897989 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:44:57.898389 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:44:57.898429 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:44:57.898453 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:44:57.904176 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:44:57.904944 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:44:57.909059 initrd-setup-root[743]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:44:57.913354 initrd-setup-root[751]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:44:57.916934 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:44:57.919399 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:44:57.943994 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:44:57.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.945207 systemd[1]: Starting ignition-mount.service... Oct 2 19:44:57.945826 systemd[1]: Starting sysroot-boot.service... Oct 2 19:44:57.950452 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:44:57.958758 ignition[786]: INFO : Ignition 2.14.0 Oct 2 19:44:57.958758 ignition[786]: INFO : Stage: mount Oct 2 19:44:57.960977 ignition[786]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:44:57.960977 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:44:57.960977 ignition[786]: INFO : mount: mount passed Oct 2 19:44:57.960977 ignition[786]: INFO : Ignition finished successfully Oct 2 19:44:57.963619 systemd[1]: Finished ignition-mount.service. Oct 2 19:44:57.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:44:57.966093 systemd[1]: Finished sysroot-boot.service. Oct 2 19:44:57.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:44:58.589730 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:44:58.595808 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (794) Oct 2 19:44:58.595833 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:44:58.595843 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:44:58.596848 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:44:58.600023 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:44:58.601827 systemd[1]: Starting ignition-files.service... Oct 2 19:44:58.615247 ignition[814]: INFO : Ignition 2.14.0 Oct 2 19:44:58.615247 ignition[814]: INFO : Stage: files Oct 2 19:44:58.616345 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:44:58.616345 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:44:58.617942 ignition[814]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:44:58.618735 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:44:58.618735 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:44:58.620666 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:44:58.620666 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:44:58.622473 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:44:58.622473 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 2 19:44:58.622473 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 2 19:44:58.622473 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:44:58.622473 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:44:58.620744 unknown[814]: wrote ssh authorized keys file for user: core Oct 2 19:44:58.815214 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:44:58.986048 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:44:58.986048 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:44:58.989164 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:44:58.989164 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:44:59.094643 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:44:59.149414 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:44:59.151291 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:44:59.151291 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:44:59.153405 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:44:59.240814 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:44:59.502916 systemd-networkd[692]: eth0: Gained IPv6LL Oct 2 19:44:59.870340 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:44:59.870340 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:44:59.873529 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:44:59.873529 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:44:59.934739 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Oct 2 19:45:01.520118 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:45:01.522204 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:45:01.522204 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Oct 2 19:45:01.522204 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubectl: attempt #1 Oct 2 19:45:01.583084 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Oct 2 19:45:02.328548 ignition[814]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 9006cd791c99f5421c09ae8f6029fdd0ea4608909f590dea41ba4dd5c500440272e9ece21489d1f192966717987251ded5394ea1dd4c5d091b700ac1c8cfa392 Oct 2 19:45:02.359943 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Oct 2 19:45:02.359943 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:45:02.359943 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:45:02.359943 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 2 19:45:02.359943 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.2/cilium-linux-amd64.tar.gz: attempt #1 Oct 2 19:45:02.542021 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 2 19:45:02.600359 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 2 19:45:02.600359 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:45:02.602972 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:45:02.602972 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 2 19:45:02.602972 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(13): [started] processing unit "containerd.service" Oct 2 19:45:02.602972 ignition[814]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(13): [finished] processing unit "containerd.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(15): [finished] setting preset to enabled for 
"prepare-cni-plugins.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(17): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:45:02.625077 ignition[814]: INFO : files: op(17): op(18): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:45:02.643080 ignition[814]: INFO : files: op(17): op(18): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:45:02.644118 ignition[814]: INFO : files: op(17): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:45:02.644118 ignition[814]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:45:02.644118 ignition[814]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:45:02.644118 ignition[814]: INFO : files: files passed Oct 2 19:45:02.644118 ignition[814]: INFO : Ignition finished successfully Oct 2 19:45:02.652437 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:45:02.652457 kernel: audit: type=1130 audit(1696275902.645:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.645071 systemd[1]: Finished ignition-files.service. Oct 2 19:45:02.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.646529 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:45:02.656868 kernel: audit: type=1130 audit(1696275902.653:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.656882 kernel: audit: type=1130 audit(1696275902.656:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.656957 initrd-setup-root-after-ignition[838]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:45:02.662376 kernel: audit: type=1131 audit(1696275902.656:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:02.649855 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:45:02.663636 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:45:02.650493 systemd[1]: Starting ignition-quench.service... Oct 2 19:45:02.652830 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:45:02.653938 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:45:02.654006 systemd[1]: Finished ignition-quench.service. Oct 2 19:45:02.656981 systemd[1]: Reached target ignition-complete.target. Oct 2 19:45:02.661670 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:45:02.671894 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:45:02.671972 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:45:02.677741 kernel: audit: type=1130 audit(1696275902.673:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.677759 kernel: audit: type=1131 audit(1696275902.673:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.673136 systemd[1]: Reached target initrd-fs.target. Oct 2 19:45:02.677753 systemd[1]: Reached target initrd.target. Oct 2 19:45:02.678296 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:45:02.678936 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:45:02.688096 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:45:02.691300 kernel: audit: type=1130 audit(1696275902.687:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.688735 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:45:02.696886 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:45:02.697502 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:45:02.698535 systemd[1]: Stopped target timers.target. Oct 2 19:45:02.699556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:45:02.703406 kernel: audit: type=1131 audit(1696275902.699:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:02.699642 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:45:02.700618 systemd[1]: Stopped target initrd.target. Oct 2 19:45:02.703480 systemd[1]: Stopped target basic.target. Oct 2 19:45:02.704501 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:45:02.705530 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:45:02.706530 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:45:02.707630 systemd[1]: Stopped target remote-fs.target. Oct 2 19:45:02.708687 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:45:02.709733 systemd[1]: Stopped target sysinit.target. Oct 2 19:45:02.710707 systemd[1]: Stopped target local-fs.target. Oct 2 19:45:02.711694 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:45:02.712668 systemd[1]: Stopped target swap.target. Oct 2 19:45:02.717256 kernel: audit: type=1131 audit(1696275902.713:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.713571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:45:02.713655 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:45:02.721373 kernel: audit: type=1131 audit(1696275902.717:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.714648 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:45:02.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.717295 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:45:02.717387 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:45:02.718514 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:45:02.718598 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:45:02.721489 systemd[1]: Stopped target paths.target. Oct 2 19:45:02.722411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:45:02.725835 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:45:02.726616 systemd[1]: Stopped target slices.target. Oct 2 19:45:02.727563 systemd[1]: Stopped target sockets.target. Oct 2 19:45:02.728490 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:45:02.728559 systemd[1]: Closed iscsid.socket. Oct 2 19:45:02.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.729446 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:45:02.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:02.729552 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:45:02.730575 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:45:02.730660 systemd[1]: Stopped ignition-files.service. Oct 2 19:45:02.732098 systemd[1]: Stopping ignition-mount.service... Oct 2 19:45:02.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.732656 systemd[1]: Stopping iscsiuio.service... Oct 2 19:45:02.734066 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:45:02.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.734541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:45:02.734647 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:45:02.739111 ignition[855]: INFO : Ignition 2.14.0 Oct 2 19:45:02.739111 ignition[855]: INFO : Stage: umount Oct 2 19:45:02.739111 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:45:02.739111 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:45:02.735775 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:45:02.742508 ignition[855]: INFO : umount: umount passed Oct 2 19:45:02.742508 ignition[855]: INFO : Ignition finished successfully Oct 2 19:45:02.736305 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:45:02.744947 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:45:02.745532 systemd[1]: Stopped iscsiuio.service. Oct 2 19:45:02.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.747517 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:45:02.748508 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:45:02.749127 systemd[1]: Stopped ignition-mount.service. Oct 2 19:45:02.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.750328 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:45:02.750978 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:45:02.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.752265 systemd[1]: Stopped target network.target. Oct 2 19:45:02.753265 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:45:02.753290 systemd[1]: Closed iscsiuio.socket. Oct 2 19:45:02.754643 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:45:02.755257 systemd[1]: Stopped ignition-disks.service. Oct 2 19:45:02.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.756258 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Oct 2 19:45:02.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.756287 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:45:02.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.757423 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:45:02.757452 systemd[1]: Stopped ignition-setup.service. Oct 2 19:45:02.758462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:45:02.758490 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:45:02.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.761020 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:45:02.762108 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:45:02.763232 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:45:02.763840 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:45:02.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.765816 systemd-networkd[692]: eth0: DHCPv6 lease lost Oct 2 19:45:02.766645 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:45:02.767302 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:45:02.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.768733 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:45:02.768759 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:45:02.771058 systemd[1]: Stopping network-cleanup.service... Oct 2 19:45:02.772331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:45:02.772000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:45:02.772390 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:45:02.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.792631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:45:02.792664 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:45:02.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.794312 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:45:02.795055 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:45:02.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:45:02.796249 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:45:02.798091 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:45:02.799527 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:45:02.800359 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:45:02.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.802202 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:45:02.802982 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:45:02.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.805140 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:45:02.805869 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:45:02.807171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:45:02.807206 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:45:02.808000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:45:02.809126 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:45:02.809166 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:45:02.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.811010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:45:02.811045 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:45:02.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.812765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:45:02.812808 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:45:02.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.815139 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:45:02.816212 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:45:02.816251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:45:02.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.818025 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:45:02.818063 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:45:02.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.819644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:45:02.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:45:02.819674 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:45:02.821950 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:45:02.823161 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:45:02.823781 systemd[1]: Stopped network-cleanup.service. Oct 2 19:45:02.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.825012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:45:02.825687 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:45:02.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:02.826968 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:45:02.828615 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:45:02.832986 systemd[1]: Switching root. Oct 2 19:45:02.836000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:45:02.836000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:45:02.836000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:45:02.836000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:45:02.836000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:45:02.849987 iscsid[702]: iscsid shutting down. Oct 2 19:45:02.850500 systemd-journald[196]: Journal stopped Oct 2 19:45:06.033065 systemd-journald[196]: Received SIGTERM from PID 1 (systemd). Oct 2 19:45:06.033120 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:45:06.033135 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:45:06.033145 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:45:06.033155 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:45:06.033164 kernel: SELinux: policy capability open_perms=1 Oct 2 19:45:06.033174 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:45:06.033183 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:45:06.033192 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:45:06.033203 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:45:06.033213 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:45:06.033222 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:45:06.033232 systemd[1]: Successfully loaded SELinux policy in 36.090ms. Oct 2 19:45:06.033251 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.478ms. Oct 2 19:45:06.033271 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:45:06.033282 systemd[1]: Detected virtualization kvm. Oct 2 19:45:06.033291 systemd[1]: Detected architecture x86-64. Oct 2 19:45:06.033301 systemd[1]: Detected first boot. 
Oct 2 19:45:06.033313 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:45:06.033323 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:45:06.033338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:45:06.033348 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:45:06.033359 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:45:06.033370 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:45:06.033382 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:45:06.033393 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:45:06.033403 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:45:06.033415 systemd[1]: Created slice system-getty.slice. Oct 2 19:45:06.033425 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:45:06.033435 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:45:06.033446 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:45:06.033456 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:45:06.033466 systemd[1]: Created slice user.slice. Oct 2 19:45:06.033477 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:45:06.033488 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:45:06.033497 systemd[1]: Set up automount boot.automount. Oct 2 19:45:06.033507 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:45:06.033518 systemd[1]: Reached target integritysetup.target. Oct 2 19:45:06.033528 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:45:06.033538 systemd[1]: Reached target remote-fs.target. Oct 2 19:45:06.033548 systemd[1]: Reached target slices.target. Oct 2 19:45:06.033559 systemd[1]: Reached target swap.target. Oct 2 19:45:06.033569 systemd[1]: Reached target torcx.target. Oct 2 19:45:06.033579 systemd[1]: Reached target veritysetup.target. Oct 2 19:45:06.033589 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:45:06.033600 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:45:06.033610 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:45:06.033620 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:45:06.033630 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:45:06.033639 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:45:06.033649 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:45:06.033660 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:45:06.033671 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:45:06.033681 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:45:06.033691 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:45:06.033701 systemd[1]: Mounting media.mount... Oct 2 19:45:06.033712 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:45:06.033722 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:45:06.033732 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:45:06.033742 systemd[1]: Mounting tmp.mount... Oct 2 19:45:06.033753 systemd[1]: Starting flatcar-tmpfiles.service... 
Oct 2 19:45:06.033763 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:45:06.033773 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:45:06.033794 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:45:06.033804 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:45:06.033814 systemd[1]: Starting modprobe@drm.service... Oct 2 19:45:06.033824 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:45:06.033834 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:45:06.033844 systemd[1]: Starting modprobe@loop.service... Oct 2 19:45:06.033856 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:45:06.033867 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 2 19:45:06.033877 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Oct 2 19:45:06.033886 systemd[1]: Starting systemd-journald.service... Oct 2 19:45:06.033896 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:45:06.033906 kernel: fuse: init (API version 7.34) Oct 2 19:45:06.033917 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:45:06.033926 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:45:06.033936 kernel: loop: module loaded Oct 2 19:45:06.033947 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:45:06.033957 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:45:06.033966 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:45:06.033976 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:45:06.033986 systemd[1]: Mounted media.mount. Oct 2 19:45:06.033996 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:45:06.034006 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:45:06.034016 systemd[1]: Mounted tmp.mount. Oct 2 19:45:06.034025 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:45:06.034039 systemd-journald[995]: Journal started Oct 2 19:45:06.034076 systemd-journald[995]: Runtime Journal (/run/log/journal/2806a812693d422ea44d06fe612e82fc) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:45:05.968000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:45:05.968000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Oct 2 19:45:06.031000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:45:06.031000 audit[995]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe5fcd99b0 a2=4000 a3=7ffe5fcd9a4c items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:06.031000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:45:06.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:06.035875 systemd[1]: Started systemd-journald.service. Oct 2 19:45:06.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.036271 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:45:06.036406 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:45:06.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.037290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:45:06.037454 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:45:06.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.038250 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:45:06.038407 systemd[1]: Finished modprobe@drm.service. Oct 2 19:45:06.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.039128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:45:06.039320 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:45:06.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.040138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:45:06.040278 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:45:06.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.040998 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 2 19:45:06.041179 systemd[1]: Finished modprobe@loop.service. Oct 2 19:45:06.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.042036 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:45:06.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.042916 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:45:06.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.044003 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:45:06.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.045030 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:45:06.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.045900 systemd[1]: Reached target network-pre.target. Oct 2 19:45:06.047509 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:45:06.049065 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:45:06.049578 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:45:06.050854 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:45:06.052102 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:45:06.052650 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:45:06.054397 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:45:06.055037 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:45:06.055817 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:45:06.057964 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:45:06.061853 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:45:06.062533 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:45:06.064503 systemd-journald[995]: Time spent on flushing to /var/log/journal/2806a812693d422ea44d06fe612e82fc is 18.380ms for 1049 entries. Oct 2 19:45:06.064503 systemd-journald[995]: System Journal (/var/log/journal/2806a812693d422ea44d06fe612e82fc) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:45:06.095936 systemd-journald[995]: Received client request to flush runtime journal. Oct 2 19:45:06.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:06.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.071203 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:45:06.097000 udevadm[1043]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:45:06.077144 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:45:06.079101 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:45:06.080466 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:45:06.081102 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:45:06.082933 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:45:06.084653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:45:06.096835 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:45:06.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.106040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:45:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.653455 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:45:06.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.656266 systemd[1]: Starting systemd-udevd.service... Oct 2 19:45:06.673557 systemd-udevd[1053]: Using default interface naming scheme 'v252'. Oct 2 19:45:06.687488 systemd[1]: Started systemd-udevd.service. Oct 2 19:45:06.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.690359 systemd[1]: Starting systemd-networkd.service... Oct 2 19:45:06.697952 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:45:06.732645 systemd[1]: Started systemd-userdbd.service. Oct 2 19:45:06.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.734406 systemd[1]: Found device dev-ttyS0.device. Oct 2 19:45:06.740492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 19:45:06.771000 audit[1056]: AVC avc: denied { confidentiality } for pid=1056 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:45:06.771000 audit[1056]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d7d8fc0660 a1=32194 a2=7f7143c8abc5 a3=5 items=106 ppid=1053 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:06.771000 audit: CWD cwd="/" Oct 2 19:45:06.771000 audit: PATH item=0 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=1 name=(null) inode=14758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=2 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=3 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=4 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=5 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=6 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=7 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=8 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=9 name=(null) inode=14762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=10 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=11 name=(null) inode=14763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=12 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=13 name=(null) inode=14764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=14 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=15 name=(null) inode=14765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=16 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=17 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=18 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=19 name=(null) inode=14767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=20 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=21 name=(null) inode=14768 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=22 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=23 name=(null) inode=14769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=24 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=25 name=(null) inode=14770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=26 name=(null) inode=14766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=27 name=(null) inode=14771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=28 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=29 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=30 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=31 name=(null) inode=14773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=32 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=33 name=(null) inode=14774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=34 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=35 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=36 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=37 name=(null) inode=14776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=38 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=39 name=(null) inode=14777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=40 name=(null) inode=14757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=41 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=42 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=43 name=(null) inode=14779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=44 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=45 name=(null) inode=14780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=46 name=(null) inode=14778 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=47 name=(null) inode=14781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=48 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=49 name=(null) inode=14782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=50 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=51 name=(null) inode=14783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=52 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=53 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=54 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=55 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=56 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=57 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=58 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=59 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=60 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=61 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=62 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=63 name=(null) inode=14789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=64 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=65 name=(null) inode=14790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=66 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=67 name=(null) inode=14791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=68 name=(null) inode=14787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=69 name=(null) inode=14792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=70 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=71 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=72 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=73 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=74 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=75 name=(null) inode=14795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=76 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=77 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=78 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH 
item=79 name=(null) inode=14797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=80 name=(null) inode=14793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=81 name=(null) inode=14798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=82 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=83 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=84 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=85 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=86 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=87 name=(null) inode=14801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=88 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=89 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=90 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.779834 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:45:06.771000 audit: PATH item=91 name=(null) inode=14803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=92 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=93 name=(null) inode=14804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=94 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 
audit: PATH item=95 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=96 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=97 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=98 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=99 name=(null) inode=14807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=100 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=101 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=102 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=103 name=(null) inode=14809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=104 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PATH item=105 name=(null) inode=14810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:45:06.771000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:45:06.781632 systemd-networkd[1065]: lo: Link UP Oct 2 19:45:06.781647 systemd-networkd[1065]: lo: Gained carrier Oct 2 19:45:06.782107 systemd-networkd[1065]: Enumeration completed Oct 2 19:45:06.782256 systemd-networkd[1065]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:45:06.782257 systemd[1]: Started systemd-networkd.service. Oct 2 19:45:06.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:06.783502 systemd-networkd[1065]: eth0: Link UP Oct 2 19:45:06.783514 systemd-networkd[1065]: eth0: Gained carrier Oct 2 19:45:06.794818 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 2 19:45:06.795188 systemd-networkd[1065]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:45:06.818820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 2 19:45:06.824845 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:45:06.827812 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:45:06.877036 kernel: kvm: Nested Virtualization enabled Oct 2 19:45:06.877172 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:45:06.894821 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:45:06.913174 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:45:06.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.914945 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:45:06.929054 lvm[1090]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:45:06.962620 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:45:06.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.963420 systemd[1]: Reached target cryptsetup.target. Oct 2 19:45:06.965095 systemd[1]: Starting lvm2-activation.service... Oct 2 19:45:06.969025 lvm[1092]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:45:06.993035 systemd[1]: Finished lvm2-activation.service. Oct 2 19:45:06.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:06.993733 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:45:06.994315 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:45:06.994341 systemd[1]: Reached target local-fs.target. Oct 2 19:45:06.994883 systemd[1]: Reached target machines.target. Oct 2 19:45:06.996651 systemd[1]: Starting ldconfig.service... Oct 2 19:45:06.997617 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:45:06.997665 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:45:06.998562 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:45:06.999882 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:45:07.002207 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:45:07.002989 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:45:07.003026 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:45:07.003856 systemd[1]: Starting systemd-tmpfiles-setup.service... 
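[Editor's note] For the DHCPv4 lease recorded above ("eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1"), a minimal Python sketch using only the standard ipaddress module to confirm the gateway falls inside the leased prefix; variable names are illustrative, the values are copied from the log line:

import ipaddress

# Values taken from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("10.0.0.9/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway lies inside the leased /16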
Oct 2 19:45:07.004698 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1095 (bootctl) Oct 2 19:45:07.007292 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:45:07.013087 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:45:07.013800 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:45:07.015333 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:45:07.020646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:45:07.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:07.048085 systemd-fsck[1103]: fsck.fat 4.2 (2021-01-31) Oct 2 19:45:07.048085 systemd-fsck[1103]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:45:07.050340 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:45:07.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:07.053041 systemd[1]: Mounting boot.mount... Oct 2 19:45:07.087874 systemd[1]: Mounted boot.mount. Oct 2 19:45:08.004439 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:45:08.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.007837 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:45:08.007891 kernel: audit: type=1130 audit(1696275908.004:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.095834 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:45:08.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.098206 systemd[1]: Starting audit-rules.service... Oct 2 19:45:08.099813 kernel: audit: type=1130 audit(1696275908.095:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.100647 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:45:08.102690 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:45:08.105458 systemd[1]: Starting systemd-resolved.service... Oct 2 19:45:08.110590 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:45:08.142861 systemd-networkd[1065]: eth0: Gained IPv6LL Oct 2 19:45:08.153360 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:45:08.154379 systemd[1]: Finished clean-ca-certificates.service. 
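[Editor's note] The kernel-formatted audit records below are keyed as audit(&lt;epoch seconds&gt;.&lt;milliseconds&gt;:&lt;serial&gt;). A small Python sketch, for reference, showing that the epoch part of audit(1696275908.004:119) from the record above corresponds to the journal timestamp Oct 2 19:45:08.004 UTC:

from datetime import datetime, timezone

# Epoch value copied from "audit(1696275908.004:119)" in the kernel audit record above.
stamp = datetime.fromtimestamp(1696275908.004, tz=timezone.utc)
print(stamp.isoformat())  # 2023-10-02T19:45:08.004000+00:00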
Oct 2 19:45:08.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.155232 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:45:08.158146 kernel: audit: type=1130 audit(1696275908.154:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.157000 audit[1125]: SYSTEM_BOOT pid=1125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.161824 kernel: audit: type=1127 audit(1696275908.157:122): pid=1125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.162111 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:45:08.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.166981 kernel: audit: type=1130 audit(1696275908.162:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:08.180849 augenrules[1133]: No rules Oct 2 19:45:08.179000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:45:08.179000 audit[1133]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd07044a10 a2=420 a3=0 items=0 ppid=1111 pid=1133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:08.179000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:45:08.182849 systemd[1]: Finished audit-rules.service. Oct 2 19:45:08.183829 kernel: audit: type=1305 audit(1696275908.179:124): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:45:08.183876 kernel: audit: type=1300 audit(1696275908.179:124): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd07044a10 a2=420 a3=0 items=0 ppid=1111 pid=1133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:08.183909 kernel: audit: type=1327 audit(1696275908.179:124): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:45:08.193310 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:45:08.362587 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:45:08.363138 systemd-timesyncd[1122]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
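[Editor's note] The PROCTITLE value in the type=1327 record above is the process command line, hex-encoded with NUL separators between arguments. A short Python sketch for decoding it (the helper name is illustrative; the hex string is copied from the record above):

# Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# Hex value copied from the type=1327 record above.
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))  # -> /sbin/auditctl -R /etc/audit/audit.rules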
Oct 2 19:45:08.363180 systemd-timesyncd[1122]: Initial clock synchronization to Mon 2023-10-02 19:45:08.420755 UTC. Oct 2 19:45:08.363616 systemd[1]: Reached target time-set.target. Oct 2 19:45:08.365351 systemd-resolved[1121]: Positive Trust Anchors: Oct 2 19:45:08.365361 systemd-resolved[1121]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:45:08.365386 systemd-resolved[1121]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:45:08.379836 systemd-resolved[1121]: Defaulting to hostname 'linux'. Oct 2 19:45:08.381381 systemd[1]: Started systemd-resolved.service. Oct 2 19:45:08.414436 systemd[1]: Reached target network.target. Oct 2 19:45:08.414995 systemd[1]: Reached target nss-lookup.target. Oct 2 19:45:08.564069 ldconfig[1094]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:45:08.696396 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:45:08.697120 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:45:08.698817 systemd[1]: Finished ldconfig.service. Oct 2 19:45:08.700994 systemd[1]: Starting systemd-update-done.service... Oct 2 19:45:08.706039 systemd[1]: Finished systemd-update-done.service. Oct 2 19:45:08.706679 systemd[1]: Reached target sysinit.target. Oct 2 19:45:08.707280 systemd[1]: Started motdgen.path. Oct 2 19:45:08.707753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:45:08.708597 systemd[1]: Started logrotate.timer. Oct 2 19:45:08.709162 systemd[1]: Started mdadm.timer. Oct 2 19:45:08.709613 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:45:08.710179 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:45:08.710207 systemd[1]: Reached target paths.target. Oct 2 19:45:08.710703 systemd[1]: Reached target timers.target. Oct 2 19:45:08.711479 systemd[1]: Listening on dbus.socket. Oct 2 19:45:08.713198 systemd[1]: Starting docker.socket... Oct 2 19:45:08.714638 systemd[1]: Listening on sshd.socket. Oct 2 19:45:08.715242 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:45:08.715571 systemd[1]: Listening on docker.socket. Oct 2 19:45:08.716217 systemd[1]: Reached target sockets.target. Oct 2 19:45:08.716821 systemd[1]: Reached target basic.target. Oct 2 19:45:08.717624 systemd[1]: System is tainted: cgroupsv1 Oct 2 19:45:08.717673 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:45:08.717690 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:45:08.718593 systemd[1]: Starting containerd.service... Oct 2 19:45:08.719850 systemd[1]: Starting dbus.service... Oct 2 19:45:08.721147 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:45:08.722717 systemd[1]: Starting extend-filesystems.service... 
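[Editor's note] The positive trust anchor logged above by systemd-resolved is the root-zone DS record. A small sketch splitting it into its fields (key tag, algorithm, digest type, digest); the record text is copied verbatim from the log line, the annotations of algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256) follow the DNSSEC registries:

# Root trust anchor as logged by systemd-resolved above.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = record.split()
# key_tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256)
print(key_tag, algorithm, digest_type, len(digest) // 2)  # 20326 8 2 32  (32-byte digest)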
Oct 2 19:45:08.723329 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:45:08.724898 systemd[1]: Starting motdgen.service... Oct 2 19:45:08.726227 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:45:08.727005 jq[1150]: false Oct 2 19:45:08.728153 systemd[1]: Starting prepare-critools.service... Oct 2 19:45:08.729667 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:45:08.731152 systemd[1]: Starting sshd-keygen.service... Oct 2 19:45:08.733344 systemd[1]: Starting systemd-logind.service... Oct 2 19:45:08.734190 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:45:08.734245 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:45:08.735422 systemd[1]: Starting update-engine.service... Oct 2 19:45:08.747242 jq[1171]: true Oct 2 19:45:08.739147 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:45:08.741531 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:45:08.741748 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:45:08.743128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:45:08.743336 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:45:08.754008 jq[1181]: true Oct 2 19:45:08.754964 tar[1174]: ./ Oct 2 19:45:08.754964 tar[1174]: ./macvlan Oct 2 19:45:08.756423 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:45:08.756631 systemd[1]: Finished motdgen.service. Oct 2 19:45:08.765739 tar[1176]: crictl Oct 2 19:45:08.784079 extend-filesystems[1151]: Found sr0 Oct 2 19:45:08.785668 extend-filesystems[1151]: Found vda Oct 2 19:45:08.788874 extend-filesystems[1151]: Found vda1 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda2 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda3 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found usr Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda4 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda6 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda7 Oct 2 19:45:08.800523 extend-filesystems[1151]: Found vda9 Oct 2 19:45:08.800523 extend-filesystems[1151]: Checking size of /dev/vda9 Oct 2 19:45:08.808684 tar[1174]: ./static Oct 2 19:45:08.803737 systemd[1]: Started dbus.service. Oct 2 19:45:08.803560 dbus-daemon[1148]: [system] SELinux support is enabled Oct 2 19:45:08.806534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:45:08.806555 systemd[1]: Reached target system-config.target. Oct 2 19:45:08.807239 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:45:08.807252 systemd[1]: Reached target user-config.target. 
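[Editor's note] Several units above and below are logged as "was skipped because of an unmet condition check (...)". A minimal Python sketch (regex and function name are illustrative) for pulling the unit name and the unmet condition out of journal text in that shape:

import re

# Matches messages like:
# "tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0)."
SKIP_RE = re.compile(
    r"(?P<unit>\S+) was skipped because of an unmet condition check \((?P<condition>[^)]+)\)"
)

def unmet_conditions(journal_text: str):
    """Yield (unit, condition) pairs for every skipped-unit message found."""
    for m in SKIP_RE.finditer(journal_text):
        yield m.group("unit"), m.group("condition")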
Oct 2 19:45:08.814230 extend-filesystems[1151]: Old size kept for /dev/vda9 Oct 2 19:45:08.823989 bash[1205]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:45:08.824202 update_engine[1166]: I1002 19:45:08.819010 1166 main.cc:92] Flatcar Update Engine starting Oct 2 19:45:08.824202 update_engine[1166]: I1002 19:45:08.822889 1166 update_check_scheduler.cc:74] Next update check in 6m18s Oct 2 19:45:08.814602 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:45:08.814868 systemd[1]: Finished extend-filesystems.service. Oct 2 19:45:08.820369 systemd-logind[1162]: Watching system buttons on /dev/input/event2 (Power Button) Oct 2 19:45:08.820384 systemd-logind[1162]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:45:08.823156 systemd-logind[1162]: New seat seat0. Oct 2 19:45:08.831034 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:45:08.832339 systemd[1]: Started update-engine.service. Oct 2 19:45:08.838793 systemd[1]: Started systemd-logind.service. Oct 2 19:45:08.841274 systemd[1]: Started locksmithd.service. Oct 2 19:45:08.843489 env[1178]: time="2023-10-02T19:45:08.841740836Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:45:08.852221 tar[1174]: ./vlan Oct 2 19:45:08.879799 tar[1174]: ./portmap Oct 2 19:45:08.881253 env[1178]: time="2023-10-02T19:45:08.881221577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:45:08.881446 env[1178]: time="2023-10-02T19:45:08.881430168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:45:08.882943 env[1178]: time="2023-10-02T19:45:08.882922366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:45:08.883024 env[1178]: time="2023-10-02T19:45:08.883006274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:45:08.883319 env[1178]: time="2023-10-02T19:45:08.883301958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:45:08.883392 env[1178]: time="2023-10-02T19:45:08.883374825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:45:08.883470 env[1178]: time="2023-10-02T19:45:08.883451168Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:45:08.883543 env[1178]: time="2023-10-02T19:45:08.883525237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:45:08.883686 env[1178]: time="2023-10-02T19:45:08.883666261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:45:08.884128 env[1178]: time="2023-10-02T19:45:08.884072714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:45:08.884485 env[1178]: time="2023-10-02T19:45:08.884461593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:45:08.884628 env[1178]: time="2023-10-02T19:45:08.884609911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:45:08.884947 env[1178]: time="2023-10-02T19:45:08.884851394Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:45:08.885071 env[1178]: time="2023-10-02T19:45:08.885010743Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.891953413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.891978630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.891991073Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892020799Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892033543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892045445Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892056546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892068428Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892079660Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892091402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892101611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892112401Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892190417Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:45:08.893805 env[1178]: time="2023-10-02T19:45:08.892248446Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892537027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892557967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892569488Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892604263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892614803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892627617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892637255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892647965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892657974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892667452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892677380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892688111Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892779382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892805230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894097 env[1178]: time="2023-10-02T19:45:08.892815189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:45:08.894394 env[1178]: time="2023-10-02T19:45:08.892824787Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:45:08.894394 env[1178]: time="2023-10-02T19:45:08.892837751Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:45:08.894394 env[1178]: time="2023-10-02T19:45:08.892846728Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:45:08.894394 env[1178]: time="2023-10-02T19:45:08.892862117Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:45:08.894394 env[1178]: time="2023-10-02T19:45:08.892891252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893079104Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893122285Z" level=info msg="Connect containerd service" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893147171Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893528527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893701010Z" level=info msg="Start subscribing containerd event" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893735375Z" level=info msg="Start recovering state" Oct 2 19:45:08.894491 env[1178]: time="2023-10-02T19:45:08.893772965Z" level=info msg="Start event monitor" Oct 2 19:45:08.897414 env[1178]: time="2023-10-02T19:45:08.894579478Z" level=info msg="Start snapshots syncer" Oct 2 19:45:08.897414 env[1178]: time="2023-10-02T19:45:08.894595848Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:45:08.897414 env[1178]: time="2023-10-02T19:45:08.894602080Z" level=info msg="Start streaming server" Oct 2 19:45:08.897414 env[1178]: time="2023-10-02T19:45:08.895940360Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 19:45:08.897414 env[1178]: time="2023-10-02T19:45:08.896060745Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:45:08.897352 systemd[1]: Started containerd.service. Oct 2 19:45:08.904185 env[1178]: time="2023-10-02T19:45:08.904160836Z" level=info msg="containerd successfully booted in 0.064692s" Oct 2 19:45:08.908618 tar[1174]: ./host-local Oct 2 19:45:08.934071 tar[1174]: ./vrf Oct 2 19:45:08.961736 tar[1174]: ./bridge Oct 2 19:45:08.967635 locksmithd[1212]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:45:08.995183 tar[1174]: ./tuning Oct 2 19:45:09.021841 tar[1174]: ./firewall Oct 2 19:45:09.032005 systemd[1]: Finished prepare-critools.service. Oct 2 19:45:09.053663 tar[1174]: ./host-device Oct 2 19:45:09.079961 tar[1174]: ./sbr Oct 2 19:45:09.103943 tar[1174]: ./loopback Oct 2 19:45:09.126820 tar[1174]: ./dhcp Oct 2 19:45:09.192804 tar[1174]: ./ptp Oct 2 19:45:09.221096 tar[1174]: ./ipvlan Oct 2 19:45:09.248309 tar[1174]: ./bandwidth Oct 2 19:45:09.282283 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:45:10.886904 sshd_keygen[1180]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:45:10.903468 systemd[1]: Finished sshd-keygen.service. Oct 2 19:45:10.905345 systemd[1]: Starting issuegen.service... Oct 2 19:45:10.909420 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:45:10.909570 systemd[1]: Finished issuegen.service. Oct 2 19:45:10.911253 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:45:10.915718 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:45:10.917527 systemd[1]: Started getty@tty1.service. Oct 2 19:45:10.918851 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:45:10.919528 systemd[1]: Reached target getty.target. Oct 2 19:45:10.920099 systemd[1]: Reached target multi-user.target. Oct 2 19:45:10.921505 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:45:10.927423 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:45:10.927612 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:45:10.929772 systemd[1]: Startup finished in 8.640s (kernel) + 7.139s (userspace) = 15.779s. Oct 2 19:45:17.942965 systemd[1]: Created slice system-sshd.slice. Oct 2 19:45:17.944006 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:54426.service. Oct 2 19:45:17.990576 sshd[1252]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:17.992074 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.001319 systemd-logind[1162]: New session 1 of user core. Oct 2 19:45:18.002272 systemd[1]: Created slice user-500.slice. Oct 2 19:45:18.003179 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:45:18.010394 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:45:18.011528 systemd[1]: Starting user@500.service... Oct 2 19:45:18.014107 (systemd)[1257]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.074341 systemd[1257]: Queued start job for default target default.target. Oct 2 19:45:18.074515 systemd[1257]: Reached target paths.target. Oct 2 19:45:18.074536 systemd[1257]: Reached target sockets.target. Oct 2 19:45:18.074553 systemd[1257]: Reached target timers.target. Oct 2 19:45:18.074567 systemd[1257]: Reached target basic.target. 
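[Editor's note] For the "Startup finished in 8.640s (kernel) + 7.139s (userspace) = 15.779s." summary above, a tiny sketch (regex and names illustrative) that extracts the per-stage timings and checks they add up to the reported total:

import re

line = "Startup finished in 8.640s (kernel) + 7.139s (userspace) = 15.779s."
stages = {name: float(sec) for sec, name in re.findall(r"([0-9.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([0-9.]+)s", line).group(1))
print(stages)                                   # {'kernel': 8.64, 'userspace': 7.139}
print(round(sum(stages.values()), 3) == total)  # True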
Oct 2 19:45:18.074609 systemd[1257]: Reached target default.target. Oct 2 19:45:18.074635 systemd[1257]: Startup finished in 56ms. Oct 2 19:45:18.074700 systemd[1]: Started user@500.service. Oct 2 19:45:18.075477 systemd[1]: Started session-1.scope. Oct 2 19:45:18.124814 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:54442.service. Oct 2 19:45:18.163863 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.165299 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.168750 systemd-logind[1162]: New session 2 of user core. Oct 2 19:45:18.169427 systemd[1]: Started session-2.scope. Oct 2 19:45:18.223247 sshd[1266]: pam_unix(sshd:session): session closed for user core Oct 2 19:45:18.225773 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:54446.service. Oct 2 19:45:18.226417 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:54442.service: Deactivated successfully. Oct 2 19:45:18.227253 systemd-logind[1162]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:45:18.227313 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:45:18.228529 systemd-logind[1162]: Removed session 2. Oct 2 19:45:18.264312 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 54446 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.265322 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.268410 systemd-logind[1162]: New session 3 of user core. Oct 2 19:45:18.269316 systemd[1]: Started session-3.scope. Oct 2 19:45:18.317387 sshd[1272]: pam_unix(sshd:session): session closed for user core Oct 2 19:45:18.319770 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:54458.service. Oct 2 19:45:18.320325 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:54446.service: Deactivated successfully. Oct 2 19:45:18.321172 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:45:18.321249 systemd-logind[1162]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:45:18.322227 systemd-logind[1162]: Removed session 3. Oct 2 19:45:18.358630 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 54458 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.359608 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.362711 systemd-logind[1162]: New session 4 of user core. Oct 2 19:45:18.363595 systemd[1]: Started session-4.scope. Oct 2 19:45:18.416076 sshd[1279]: pam_unix(sshd:session): session closed for user core Oct 2 19:45:18.418503 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:54472.service. Oct 2 19:45:18.419068 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:54458.service: Deactivated successfully. Oct 2 19:45:18.419871 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:45:18.419874 systemd-logind[1162]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:45:18.420686 systemd-logind[1162]: Removed session 4. Oct 2 19:45:18.456894 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.457869 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.461433 systemd-logind[1162]: New session 5 of user core. Oct 2 19:45:18.462496 systemd[1]: Started session-5.scope. 
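[Editor's note] The repeated sshd "Accepted publickey" entries above all show the same user, source address and key fingerprint, differing only in source port. A minimal sketch (regex and helper name are illustrative) for summarising such entries:

import re

# Matches sshd lines like:
# "Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:+DuZ..."
LOGIN_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+) "
    r"ssh2: (?P<key_type>\S+) (?P<fingerprint>\S+)"
)

def accepted_logins(journal_text: str):
    """Return one dict per accepted-publickey message found in the text."""
    return [m.groupdict() for m in LOGIN_RE.finditer(journal_text)]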
Oct 2 19:45:18.518493 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:45:18.518642 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:45:18.524971 dbus-daemon[1148]: \xd0\xedT\x89\x8eU: received setenforce notice (enforcing=-867190288) Oct 2 19:45:18.527101 sudo[1291]: pam_unix(sudo:session): session closed for user root Oct 2 19:45:18.528912 sshd[1286]: pam_unix(sshd:session): session closed for user core Oct 2 19:45:18.531022 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:54476.service. Oct 2 19:45:18.531535 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:54472.service: Deactivated successfully. Oct 2 19:45:18.532379 systemd-logind[1162]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:45:18.532441 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:45:18.533435 systemd-logind[1162]: Removed session 5. Oct 2 19:45:18.573121 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 54476 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.574230 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.577074 systemd-logind[1162]: New session 6 of user core. Oct 2 19:45:18.577729 systemd[1]: Started session-6.scope. Oct 2 19:45:18.628775 sudo[1300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:45:18.628944 sudo[1300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:45:18.631111 sudo[1300]: pam_unix(sudo:session): session closed for user root Oct 2 19:45:18.635596 sudo[1299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:45:18.635824 sudo[1299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:45:18.642986 systemd[1]: Stopping audit-rules.service... Oct 2 19:45:18.642000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:45:18.643856 auditctl[1303]: No rules Oct 2 19:45:18.644197 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:45:18.644410 systemd[1]: Stopped audit-rules.service. Oct 2 19:45:18.642000 audit[1303]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3efcaf20 a2=420 a3=0 items=0 ppid=1 pid=1303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:18.645990 systemd[1]: Starting audit-rules.service... 
Oct 2 19:45:18.648738 kernel: audit: type=1305 audit(1696275918.642:125): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:45:18.648812 kernel: audit: type=1300 audit(1696275918.642:125): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3efcaf20 a2=420 a3=0 items=0 ppid=1 pid=1303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:18.648839 kernel: audit: type=1327 audit(1696275918.642:125): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:45:18.642000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:45:18.649811 kernel: audit: type=1131 audit(1696275918.642:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.659007 augenrules[1321]: No rules Oct 2 19:45:18.659506 systemd[1]: Finished audit-rules.service. Oct 2 19:45:18.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.660239 sudo[1299]: pam_unix(sudo:session): session closed for user root Oct 2 19:45:18.661514 sshd[1293]: pam_unix(sshd:session): session closed for user core Oct 2 19:45:18.658000 audit[1299]: USER_END pid=1299 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.663457 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:54488.service. Oct 2 19:45:18.667444 kernel: audit: type=1130 audit(1696275918.658:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.667589 kernel: audit: type=1106 audit(1696275918.658:128): pid=1299 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.667632 kernel: audit: type=1104 audit(1696275918.658:129): pid=1299 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.658000 audit[1299]: CRED_DISP pid=1299 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:18.667722 kernel: audit: type=1106 audit(1696275918.661:130): pid=1293 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.661000 audit[1293]: USER_END pid=1293 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.664238 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:54476.service: Deactivated successfully. Oct 2 19:45:18.664961 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:45:18.665396 systemd-logind[1162]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:45:18.666140 systemd-logind[1162]: Removed session 6. Oct 2 19:45:18.670447 kernel: audit: type=1104 audit(1696275918.661:131): pid=1293 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.661000 audit[1293]: CRED_DISP pid=1293 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.9:22-10.0.0.1:54488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.674608 kernel: audit: type=1130 audit(1696275918.661:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.9:22-10.0.0.1:54488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.9:22-10.0.0.1:54476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:18.703000 audit[1326]: USER_ACCT pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.704487 sshd[1326]: Accepted publickey for core from 10.0.0.1 port 54488 ssh2: RSA SHA256:+DuZ6jeJ+85lrS0QTsE47nySPANyTUNJwedXcdEfg68 Oct 2 19:45:18.704000 audit[1326]: CRED_ACQ pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.704000 audit[1326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb47af260 a2=3 a3=0 items=0 ppid=1 pid=1326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:18.704000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:45:18.705403 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:45:18.708507 systemd-logind[1162]: New session 7 of user core. Oct 2 19:45:18.709230 systemd[1]: Started session-7.scope. Oct 2 19:45:18.711000 audit[1326]: USER_START pid=1326 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.712000 audit[1331]: CRED_ACQ pid=1331 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:45:18.758000 audit[1332]: USER_ACCT pid=1332 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.758000 audit[1332]: CRED_REFR pid=1332 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:18.760105 sudo[1332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:45:18.760312 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:45:18.759000 audit[1332]: USER_START pid=1332 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:45:19.281460 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:45:19.286250 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:45:19.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:19.286484 systemd[1]: Reached target network-online.target. Oct 2 19:45:19.287607 systemd[1]: Starting docker.service... 
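Several audit records in this log end with a PROCTITLE field, which is simply the process's command line hex-encoded with NUL separators: 2F7362696E2F617564697463746C002D44 above is /sbin/auditctl -D, 737368643A20636F7265205B707269765D is sshd: core [priv], and the iptables records further down encode the exact chain-manipulation commands dockerd issues. A small decoder, offered purely as a convenience sketch (the function name is mine):

    def decode_proctitle(hex_value: str) -> list[str]:
        """Decode an audit PROCTITLE hex payload into its argv list.

        The kernel stores the command line as raw bytes with NUL separators,
        which auditd renders as an uppercase hex string.
        """
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # ['/sbin/auditctl', '-D']
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # ['sshd: core [priv]']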
Oct 2 19:45:19.320011 env[1351]: time="2023-10-02T19:45:19.319971023Z" level=info msg="Starting up" Oct 2 19:45:19.321490 env[1351]: time="2023-10-02T19:45:19.321445838Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 2 19:45:19.321490 env[1351]: time="2023-10-02T19:45:19.321480269Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 2 19:45:19.321565 env[1351]: time="2023-10-02T19:45:19.321501920Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 2 19:45:19.321565 env[1351]: time="2023-10-02T19:45:19.321511898Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 2 19:45:19.322996 env[1351]: time="2023-10-02T19:45:19.322976846Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 2 19:45:19.322996 env[1351]: time="2023-10-02T19:45:19.322992124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 2 19:45:19.323068 env[1351]: time="2023-10-02T19:45:19.323006849Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 2 19:45:19.323068 env[1351]: time="2023-10-02T19:45:19.323014749Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 2 19:45:24.726194 env[1351]: time="2023-10-02T19:45:24.726157039Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 2 19:45:24.726194 env[1351]: time="2023-10-02T19:45:24.726183143Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 2 19:45:24.726573 env[1351]: time="2023-10-02T19:45:24.726384978Z" level=info msg="Loading containers: start." 
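dockerd's own messages, relayed by the journal as env[1351], are logfmt-style key=value pairs (time, level, msg, sometimes module): the grpc lines record it dialing containerd over /var/run/docker/libcontainerd/docker-containerd.sock, and the blkio warnings just note cgroup features this kernel does not expose. If those fields are needed programmatically, a rough sketch along these lines handles the simple key="value" quoting used here (helper name is mine; shlex does the quote handling):

    import shlex

    def parse_logfmt(payload: str) -> dict:
        """Split a logfmt payload such as
        time="2023-10-02T19:45:19.319971023Z" level=info msg="Starting up"
        into a dict; shlex takes care of the double-quoted values."""
        fields = {}
        for token in shlex.split(payload):
            key, _, value = token.partition("=")
            fields[key] = value
        return fields

    sample = 'time="2023-10-02T19:45:24.726384978Z" level=info msg="Loading containers: start."'
    print(parse_logfmt(sample)["msg"])   # Loading containers: start.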
Oct 2 19:45:24.762000 audit[1385]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.764762 kernel: kauditd_printk_skb: 12 callbacks suppressed Oct 2 19:45:24.764825 kernel: audit: type=1325 audit(1696275924.762:143): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.764845 kernel: audit: type=1300 audit(1696275924.762:143): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd51b3a5a0 a2=0 a3=7ffd51b3a58c items=0 ppid=1351 pid=1385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.762000 audit[1385]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd51b3a5a0 a2=0 a3=7ffd51b3a58c items=0 ppid=1351 pid=1385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.768806 kernel: audit: type=1327 audit(1696275924.762:143): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 2 19:45:24.768847 kernel: audit: type=1325 audit(1696275924.763:144): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.762000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 2 19:45:24.763000 audit[1387]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.770182 kernel: audit: type=1300 audit(1696275924.763:144): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd4365b5a0 a2=0 a3=7ffd4365b58c items=0 ppid=1351 pid=1387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.763000 audit[1387]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd4365b5a0 a2=0 a3=7ffd4365b58c items=0 ppid=1351 pid=1387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 2 19:45:24.774239 kernel: audit: type=1327 audit(1696275924.763:144): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 2 19:45:24.774262 kernel: audit: type=1325 audit(1696275924.764:145): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.764000 audit[1389]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.775599 kernel: audit: type=1300 audit(1696275924.764:145): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7c66bf90 a2=0 a3=7ffc7c66bf7c items=0 ppid=1351 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.764000 audit[1389]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7c66bf90 a2=0 a3=7ffc7c66bf7c items=0 ppid=1351 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.778395 kernel: audit: type=1327 audit(1696275924.764:145): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 2 19:45:24.764000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 2 19:45:24.765000 audit[1391]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.781242 kernel: audit: type=1325 audit(1696275924.765:146): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.765000 audit[1391]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd21012850 a2=0 a3=7ffd2101283c items=0 ppid=1351 pid=1391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.765000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 2 19:45:24.767000 audit[1393]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.767000 audit[1393]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffce70e2560 a2=0 a3=7ffce70e254c items=0 ppid=1351 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.767000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 2 19:45:24.803000 audit[1398]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:24.803000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6c3ef850 a2=0 a3=7ffc6c3ef83c items=0 ppid=1351 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:24.803000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 2 19:45:25.234000 audit[1400]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.234000 audit[1400]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5b1db9c0 a2=0 a3=7ffe5b1db9ac items=0 ppid=1351 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 2 19:45:25.235000 audit[1402]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.235000 audit[1402]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffe6b0c370 a2=0 a3=7fffe6b0c35c items=0 ppid=1351 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.235000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 2 19:45:25.237000 audit[1404]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.237000 audit[1404]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff70d68950 a2=0 a3=7fff70d6893c items=0 ppid=1351 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.237000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 2 19:45:25.403000 audit[1408]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.403000 audit[1408]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffe764aa60 a2=0 a3=7fffe764aa4c items=0 ppid=1351 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 2 19:45:25.403000 audit[1409]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.403000 audit[1409]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd67c87a50 a2=0 a3=7ffd67c87a3c items=0 ppid=1351 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 2 19:45:25.410808 kernel: Initializing XFRM netlink socket Oct 2 19:45:25.439332 env[1351]: time="2023-10-02T19:45:25.439301700Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 2 19:45:25.453000 audit[1417]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.453000 audit[1417]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffec17a0550 a2=0 a3=7ffec17a053c items=0 ppid=1351 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.453000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 2 19:45:25.463000 audit[1420]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.463000 audit[1420]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffeaf114d90 a2=0 a3=7ffeaf114d7c items=0 ppid=1351 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.463000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 2 19:45:25.465000 audit[1423]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.465000 audit[1423]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcb846f2e0 a2=0 a3=7ffcb846f2cc items=0 ppid=1351 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.465000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 2 19:45:25.466000 audit[1425]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.466000 audit[1425]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd93535520 a2=0 a3=7ffd9353550c items=0 ppid=1351 pid=1425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 2 19:45:25.467000 audit[1427]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.467000 audit[1427]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffddb22bb10 a2=0 a3=7ffddb22bafc items=0 ppid=1351 pid=1427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.467000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 2 19:45:25.469000 audit[1429]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.469000 audit[1429]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcdacfe540 a2=0 a3=7ffcdacfe52c items=0 ppid=1351 pid=1429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.469000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 2 19:45:25.470000 audit[1431]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.470000 audit[1431]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc2886f8a0 a2=0 a3=7ffc2886f88c items=0 ppid=1351 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.470000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 2 19:45:25.477000 audit[1434]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.477000 audit[1434]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd17e0eea0 a2=0 a3=7ffd17e0ee8c items=0 ppid=1351 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.477000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 2 19:45:25.478000 audit[1436]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.478000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc72b58f10 a2=0 a3=7ffc72b58efc items=0 ppid=1351 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.478000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 2 19:45:25.480000 audit[1438]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.480000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe121e990 a2=0 a3=7fffe121e97c items=0 ppid=1351 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.480000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 2 19:45:25.481000 audit[1440]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.481000 audit[1440]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd564ab8c0 a2=0 a3=7ffd564ab8ac items=0 ppid=1351 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.481000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 2 19:45:25.482530 systemd-networkd[1065]: docker0: Link UP Oct 2 19:45:25.648000 audit[1444]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.648000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea44291f0 a2=0 a3=7ffea44291dc items=0 ppid=1351 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.648000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 2 19:45:25.649000 audit[1445]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:25.649000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe622edcc0 a2=0 a3=7ffe622edcac items=0 ppid=1351 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:25.649000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 2 19:45:25.650542 env[1351]: time="2023-10-02T19:45:25.650511945Z" level=info msg="Loading containers: done." Oct 2 19:45:25.658777 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2964783861-merged.mount: Deactivated successfully. Oct 2 19:45:25.744639 env[1351]: time="2023-10-02T19:45:25.744594137Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 2 19:45:25.744936 env[1351]: time="2023-10-02T19:45:25.744768913Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 2 19:45:25.744936 env[1351]: time="2023-10-02T19:45:25.744866699Z" level=info msg="Daemon has completed initialization" Oct 2 19:45:26.332960 systemd[1]: Started docker.service. Oct 2 19:45:26.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:45:26.340323 env[1351]: time="2023-10-02T19:45:26.340261591Z" level=info msg="API listen on /run/docker.sock" Oct 2 19:45:26.355631 systemd[1]: Reloading. Oct 2 19:45:26.409094 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2023-10-02T19:45:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:45:26.409123 /usr/lib/systemd/system-generators/torcx-generator[1493]: time="2023-10-02T19:45:26Z" level=info msg="torcx already run" Oct 2 19:45:26.472041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:45:26.472055 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:45:26.490187 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:45:26.549984 systemd[1]: Started kubelet.service. Oct 2 19:45:26.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:26.596942 kubelet[1540]: E1002 19:45:26.596839 1540 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:45:26.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:45:26.598658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:45:26.598834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:45:26.897760 env[1178]: time="2023-10-02T19:45:26.897661790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.25.14\"" Oct 2 19:45:28.463885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988901350.mount: Deactivated successfully. 
Oct 2 19:45:30.342707 env[1178]: time="2023-10-02T19:45:30.342651212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:30.344238 env[1178]: time="2023-10-02T19:45:30.344195993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:48f6f02f2e904b54753777e0487169939971458e169171892d46ca1579632d3f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:30.345805 env[1178]: time="2023-10-02T19:45:30.345760379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:30.347093 env[1178]: time="2023-10-02T19:45:30.347066628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:30.347765 env[1178]: time="2023-10-02T19:45:30.347720760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.25.14\" returns image reference \"sha256:48f6f02f2e904b54753777e0487169939971458e169171892d46ca1579632d3f\"" Oct 2 19:45:30.355897 env[1178]: time="2023-10-02T19:45:30.355841009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.25.14\"" Oct 2 19:45:32.703503 env[1178]: time="2023-10-02T19:45:32.703432504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:32.705861 env[1178]: time="2023-10-02T19:45:32.705828838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2fdc9124e4ab3b396594897e15b74bfe70445ab8f9340ad81af657f2c971118f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:32.707693 env[1178]: time="2023-10-02T19:45:32.707646564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:32.709458 env[1178]: time="2023-10-02T19:45:32.709425694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:32.710137 env[1178]: time="2023-10-02T19:45:32.710103622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.25.14\" returns image reference \"sha256:2fdc9124e4ab3b396594897e15b74bfe70445ab8f9340ad81af657f2c971118f\"" Oct 2 19:45:32.718437 env[1178]: time="2023-10-02T19:45:32.718397986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.25.14\"" Oct 2 19:45:35.534082 env[1178]: time="2023-10-02T19:45:35.533993344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:35.536947 env[1178]: time="2023-10-02T19:45:35.536890876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:62a4b43588914100df65a99bb32d7c829fcc21b747a4ae400801eda76994ec7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:35.539984 env[1178]: 
time="2023-10-02T19:45:35.539908290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:35.542000 env[1178]: time="2023-10-02T19:45:35.541969567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:35.542931 env[1178]: time="2023-10-02T19:45:35.542873995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.25.14\" returns image reference \"sha256:62a4b43588914100df65a99bb32d7c829fcc21b747a4ae400801eda76994ec7a\"" Oct 2 19:45:35.553957 env[1178]: time="2023-10-02T19:45:35.553898196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:45:36.563721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970661191.mount: Deactivated successfully. Oct 2 19:45:36.651921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 2 19:45:36.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.652101 systemd[1]: Stopped kubelet.service. Oct 2 19:45:36.652856 kernel: kauditd_printk_skb: 65 callbacks suppressed Oct 2 19:45:36.652904 kernel: audit: type=1130 audit(1696275936.651:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.654051 systemd[1]: Started kubelet.service. Oct 2 19:45:36.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.657105 kernel: audit: type=1131 audit(1696275936.651:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.657174 kernel: audit: type=1130 audit(1696275936.653:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:36.686157 kubelet[1584]: E1002 19:45:36.686114 1584 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:45:36.688763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:45:36.688902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:45:36.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Oct 2 19:45:36.691809 kernel: audit: type=1131 audit(1696275936.688:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:45:40.191559 env[1178]: time="2023-10-02T19:45:40.191491209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.193501 env[1178]: time="2023-10-02T19:45:40.193472572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.195073 env[1178]: time="2023-10-02T19:45:40.195050823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.196497 env[1178]: time="2023-10-02T19:45:40.196455890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.196760 env[1178]: time="2023-10-02T19:45:40.196727019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:45:40.204441 env[1178]: time="2023-10-02T19:45:40.204400740Z" level=info msg="PullImage \"registry.k8s.io/pause:3.8\"" Oct 2 19:45:40.693000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430143657.mount: Deactivated successfully. Oct 2 19:45:40.726961 env[1178]: time="2023-10-02T19:45:40.726906390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.728664 env[1178]: time="2023-10-02T19:45:40.728639315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.730239 env[1178]: time="2023-10-02T19:45:40.730216496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.731592 env[1178]: time="2023-10-02T19:45:40.731561814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:40.732035 env[1178]: time="2023-10-02T19:45:40.732011595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.8\" returns image reference \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"" Oct 2 19:45:40.739943 env[1178]: time="2023-10-02T19:45:40.739899579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Oct 2 19:45:42.485405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871398896.mount: Deactivated successfully. Oct 2 19:45:46.901880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
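At this point the kubelet is crash-looping on the same error each time: /var/lib/kubelet/config.yaml does not exist yet (that file is normally written by kubeadm during init/join, which apparently has not run at this point in the log), so systemd keeps rescheduling the unit, with the restart counter reaching 2 here. The spacing of the attempts is consistent with a Restart=always unit and a RestartSec on the order of ten seconds, although the unit file itself is not shown in the log. Checking the cadence from the timestamps above:

    from datetime import datetime

    # Timestamps copied from the kubelet start/restart records in this log.
    events = [
        ("first start",       "19:45:26.549984"),
        ("restart counter 1", "19:45:36.651921"),
        ("restart counter 2", "19:45:46.901880"),
    ]

    def as_time(stamp: str) -> datetime:
        return datetime.strptime(stamp, "%H:%M:%S.%f")

    for (_, earlier), (label, later) in zip(events, events[1:]):
        gap = (as_time(later) - as_time(earlier)).total_seconds()
        print(f"{label}: {gap:.1f}s after the previous attempt")
    # restart counter 1: 10.1s after the previous attempt
    # restart counter 2: 10.2s after the previous attempt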
Oct 2 19:45:46.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.902052 systemd[1]: Stopped kubelet.service. Oct 2 19:45:46.903376 systemd[1]: Started kubelet.service. Oct 2 19:45:46.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.906258 kernel: audit: type=1130 audit(1696275946.901:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.906320 kernel: audit: type=1131 audit(1696275946.901:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.906343 kernel: audit: type=1130 audit(1696275946.902:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:46.942442 kubelet[1604]: E1002 19:45:46.942389 1604 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:45:46.944258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:45:46.944425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:45:46.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:45:46.948819 kernel: audit: type=1131 audit(1696275946.943:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 2 19:45:48.436041 env[1178]: time="2023-10-02T19:45:48.435939341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:48.552269 env[1178]: time="2023-10-02T19:45:48.552213478Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:48.648610 env[1178]: time="2023-10-02T19:45:48.648544329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:48.666666 env[1178]: time="2023-10-02T19:45:48.666614286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:48.667395 env[1178]: time="2023-10-02T19:45:48.667356774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Oct 2 19:45:48.680007 env[1178]: time="2023-10-02T19:45:48.679965149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Oct 2 19:45:50.346488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590845097.mount: Deactivated successfully. Oct 2 19:45:51.191084 env[1178]: time="2023-10-02T19:45:51.190996154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:51.193648 env[1178]: time="2023-10-02T19:45:51.193598242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:51.195560 env[1178]: time="2023-10-02T19:45:51.195525804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:51.198498 env[1178]: time="2023-10-02T19:45:51.198118244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:51.198897 env[1178]: time="2023-10-02T19:45:51.198814002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Oct 2 19:45:53.634415 systemd[1]: Stopped kubelet.service. Oct 2 19:45:53.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:53.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:45:53.638627 kernel: audit: type=1130 audit(1696275953.633:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:53.638677 kernel: audit: type=1131 audit(1696275953.634:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:53.648506 systemd[1]: Reloading. Oct 2 19:45:53.705107 /usr/lib/systemd/system-generators/torcx-generator[1713]: time="2023-10-02T19:45:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:45:53.705132 /usr/lib/systemd/system-generators/torcx-generator[1713]: time="2023-10-02T19:45:53Z" level=info msg="torcx already run" Oct 2 19:45:54.047467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:45:54.047482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:45:54.067060 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:45:54.134076 systemd[1]: Started kubelet.service. Oct 2 19:45:54.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:54.136802 kernel: audit: type=1130 audit(1696275954.133:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:54.167302 update_engine[1166]: I1002 19:45:54.167238 1166 update_attempter.cc:505] Updating boot flags... Oct 2 19:45:54.169540 kubelet[1760]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:45:54.169942 kubelet[1760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:45:54.170021 kubelet[1760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:45:54.170200 kubelet[1760]: I1002 19:45:54.170166 1760 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:45:54.172025 kubelet[1760]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:45:54.172025 kubelet[1760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
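In between the kubelet restarts, containerd has been working through the control-plane images: each PullImage request in the records above is eventually answered by a "returns image reference" message naming the resolved sha256 image ID (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.25.14, plus pause:3.8, etcd:3.5.6-0 and coredns v1.9.3). A throwaway sketch for collecting those pairs from the raw journal text, where the inner quotes show up as \" escapes (regex and helper name are mine):

    import re

    # In the journal text the containerd messages look like:
    #   msg="PullImage \"registry.k8s.io/pause:3.8\" returns image reference \"sha256:4873...\""
    PULL_RE = re.compile(r'PullImage \\"([^"\\]+)\\" returns image reference \\"([^"\\]+)\\"')

    def pulled_images(journal_text: str) -> dict:
        """Map each pulled image name to the image ID containerd resolved it to."""
        return dict(PULL_RE.findall(journal_text))

    sample = (r'msg="PullImage \"registry.k8s.io/pause:3.8\" returns image reference '
              r'\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\""')
    print(pulled_images(sample))
    # {'registry.k8s.io/pause:3.8': 'sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517'}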
Oct 2 19:45:54.172025 kubelet[1760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:45:54.459570 kubelet[1760]: I1002 19:45:54.459531 1760 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:45:54.459570 kubelet[1760]: I1002 19:45:54.459557 1760 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:45:54.459847 kubelet[1760]: I1002 19:45:54.459833 1760 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:45:54.463156 kubelet[1760]: I1002 19:45:54.463129 1760 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:45:54.463710 kubelet[1760]: E1002 19:45:54.463687 1760 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.466664 kubelet[1760]: I1002 19:45:54.466634 1760 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:45:54.466961 kubelet[1760]: I1002 19:45:54.466943 1760 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:45:54.467040 kubelet[1760]: I1002 19:45:54.467024 1760 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:45:54.467124 kubelet[1760]: I1002 19:45:54.467045 1760 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:45:54.467124 kubelet[1760]: I1002 19:45:54.467057 1760 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:45:54.467124 kubelet[1760]: I1002 19:45:54.467125 1760 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:45:54.469494 kubelet[1760]: I1002 19:45:54.469467 1760 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:45:54.469494 kubelet[1760]: 
I1002 19:45:54.469489 1760 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:45:54.469631 kubelet[1760]: I1002 19:45:54.469509 1760 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:45:54.469631 kubelet[1760]: I1002 19:45:54.469522 1760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:45:54.469930 kubelet[1760]: W1002 19:45:54.469899 1760 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.470020 kubelet[1760]: E1002 19:45:54.469948 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.470020 kubelet[1760]: I1002 19:45:54.470006 1760 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:45:54.470251 kubelet[1760]: W1002 19:45:54.470237 1760 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:45:54.470515 kubelet[1760]: I1002 19:45:54.470496 1760 server.go:1175] "Started kubelet" Oct 2 19:45:54.473480 kubelet[1760]: W1002 19:45:54.473451 1760 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.473480 kubelet[1760]: E1002 19:45:54.473480 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.474076 kubelet[1760]: I1002 19:45:54.474054 1760 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:45:54.474301 kubelet[1760]: E1002 19:45:54.474190 1760 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4b3febe1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 470480865, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 470480865, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.9:6443/api/v1/namespaces/default/events": dial tcp 
10.0.0.9:6443: connect: connection refused'(may retry after sleeping) Oct 2 19:45:54.474395 kubelet[1760]: E1002 19:45:54.474373 1760 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:45:54.474395 kubelet[1760]: E1002 19:45:54.474391 1760 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:45:54.480815 kernel: audit: type=1400 audit(1696275954.474:181): avc: denied { mac_admin } for pid=1760 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:45:54.480907 kernel: audit: type=1401 audit(1696275954.474:181): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:45:54.480928 kernel: audit: type=1300 audit(1696275954.474:181): arch=c000003e syscall=188 success=no exit=-22 a0=c000e7f8f0 a1=c000227ea8 a2=c000e7f8c0 a3=25 items=0 ppid=1 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.474000 audit[1760]: AVC avc: denied { mac_admin } for pid=1760 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:45:54.474000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:45:54.474000 audit[1760]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e7f8f0 a1=c000227ea8 a2=c000e7f8c0 a3=25 items=0 ppid=1 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.475646 1760 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.477843 1760 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.478059 1760 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.478138 1760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.478333 1760 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:45:54.481117 kubelet[1760]: E1002 19:45:54.478869 1760 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.481117 kubelet[1760]: W1002 19:45:54.479382 1760 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.481117 
kubelet[1760]: E1002 19:45:54.479424 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.481117 kubelet[1760]: I1002 19:45:54.479544 1760 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:45:54.481117 kubelet[1760]: E1002 19:45:54.480264 1760 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:54.485813 kernel: audit: type=1327 audit(1696275954.474:181): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:45:54.485859 kernel: audit: type=1400 audit(1696275954.477:182): avc: denied { mac_admin } for pid=1760 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:45:54.485877 kernel: audit: type=1401 audit(1696275954.477:182): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:45:54.474000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:45:54.477000 audit[1760]: AVC avc: denied { mac_admin } for pid=1760 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:45:54.477000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:45:54.486878 kernel: audit: type=1300 audit(1696275954.477:182): arch=c000003e syscall=188 success=no exit=-22 a0=c000bf7580 a1=c000227ec0 a2=c000e7f980 a3=25 items=0 ppid=1 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.477000 audit[1760]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bf7580 a1=c000227ec0 a2=c000e7f980 a3=25 items=0 ppid=1 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.477000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:45:54.487000 audit[1789]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.487000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe6f7b53b0 a2=0 a3=7ffe6f7b539c items=0 ppid=1760 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:45:54.488000 audit[1790]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.488000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffd892180 a2=0 a3=7ffffd89216c items=0 ppid=1760 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:45:54.490000 audit[1792]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.490000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe01a729b0 a2=0 a3=7ffe01a7299c items=0 ppid=1760 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:45:54.491000 audit[1794]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.491000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1acda670 a2=0 a3=7ffd1acda65c items=0 ppid=1760 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:45:54.496000 audit[1797]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.496000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff9c8b5bb0 a2=0 a3=7fff9c8b5b9c items=0 ppid=1760 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:45:54.497000 audit[1798]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.497000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3a490940 a2=0 a3=7ffe3a49092c items=0 ppid=1760 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:45:54.500000 audit[1801]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.500000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd71a63470 a2=0 a3=7ffd71a6345c items=0 ppid=1760 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:45:54.504000 audit[1806]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.504000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffcaf5657e0 a2=0 a3=7ffcaf5657cc items=0 ppid=1760 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:45:54.505000 audit[1807]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.505000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff77d6c570 a2=0 a3=7fff77d6c55c items=0 ppid=1760 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:45:54.506000 audit[1808]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.506000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd863d4280 a2=0 a3=7ffd863d426c items=0 ppid=1760 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:45:54.507000 audit[1810]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.507000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd033cfb50 a2=0 a3=7ffd033cfb3c items=0 ppid=1760 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:45:54.509000 audit[1812]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.509000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff0f106d70 a2=0 a3=7fff0f106d5c items=0 ppid=1760 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:45:54.511000 audit[1814]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.511000 audit[1814]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff5956ee00 a2=0 a3=7fff5956edec items=0 ppid=1760 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:45:54.513000 audit[1818]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.513000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffeb4948640 a2=0 a3=7ffeb494862c items=0 ppid=1760 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:45:54.515312 kubelet[1760]: I1002 19:45:54.515282 1760 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:45:54.515312 kubelet[1760]: I1002 19:45:54.515302 1760 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:45:54.515401 kubelet[1760]: I1002 19:45:54.515346 1760 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:45:54.516000 audit[1820]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.516000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fffd50fda30 a2=0 a3=7fffd50fda1c items=0 ppid=1760 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.516000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:45:54.517180 kubelet[1760]: I1002 19:45:54.517155 1760 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:45:54.516000 audit[1821]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.516000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9fb73b70 a2=0 a3=7fff9fb73b5c items=0 ppid=1760 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:45:54.517000 audit[1822]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.517000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb0c5ec80 a2=0 a3=7ffeb0c5ec6c items=0 ppid=1760 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:45:54.518309 kubelet[1760]: I1002 19:45:54.518291 1760 policy_none.go:49] "None policy: Start" Oct 2 19:45:54.518952 kubelet[1760]: I1002 19:45:54.518937 1760 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:45:54.519019 kubelet[1760]: I1002 19:45:54.518957 1760 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:45:54.518000 audit[1823]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.518000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa4daae60 a2=0 a3=7fffa4daae4c items=0 ppid=1760 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:45:54.518000 audit[1824]: NETFILTER_CFG table=nat:44 family=10 entries=2 op=nft_register_chain pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.518000 audit[1824]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffff935fc00 a2=0 a3=7ffff935fbec items=0 ppid=1760 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:45:54.519000 audit[1825]: NETFILTER_CFG table=filter:45 family=2 entries=1 
op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:45:54.519000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9d6e6750 a2=0 a3=7ffc9d6e673c items=0 ppid=1760 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:45:54.524084 kubelet[1760]: I1002 19:45:54.524058 1760 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:45:54.523000 audit[1760]: AVC avc: denied { mac_admin } for pid=1760 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:45:54.523000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:45:54.523000 audit[1760]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001274720 a1=c00124f410 a2=c0012746f0 a3=25 items=0 ppid=1 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.523000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:45:54.524336 kubelet[1760]: I1002 19:45:54.524121 1760 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:45:54.524336 kubelet[1760]: I1002 19:45:54.524298 1760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:45:54.525530 kubelet[1760]: E1002 19:45:54.525434 1760 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 2 19:45:54.525000 audit[1827]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.525000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdd46a29b0 a2=0 a3=7ffdd46a299c items=0 ppid=1760 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:45:54.526000 audit[1828]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.526000 audit[1828]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd83e7afe0 a2=0 a3=7ffd83e7afcc items=0 ppid=1760 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:45:54.527000 audit[1830]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.527000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffec4e0140 a2=0 a3=7fffec4e012c items=0 ppid=1760 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:45:54.528000 audit[1831]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.528000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcda67d5c0 a2=0 a3=7ffcda67d5ac items=0 ppid=1760 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:45:54.529000 audit[1832]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.529000 
audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3bce42f0 a2=0 a3=7fff3bce42dc items=0 ppid=1760 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:45:54.530000 audit[1834]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.530000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe3402c380 a2=0 a3=7ffe3402c36c items=0 ppid=1760 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:45:54.532000 audit[1836]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.532000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff0bee6c00 a2=0 a3=7fff0bee6bec items=0 ppid=1760 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:45:54.534000 audit[1838]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.534000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe2cacba10 a2=0 a3=7ffe2cacb9fc items=0 ppid=1760 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:45:54.535000 audit[1840]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.535000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe42491980 a2=0 a3=7ffe4249196c items=0 ppid=1760 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:45:54.537000 audit[1842]: NETFILTER_CFG 
table=nat:55 family=10 entries=1 op=nft_register_rule pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.537000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffd630ee000 a2=0 a3=7ffd630edfec items=0 ppid=1760 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:45:54.539137 kubelet[1760]: I1002 19:45:54.539112 1760 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:45:54.539137 kubelet[1760]: I1002 19:45:54.539135 1760 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:45:54.539210 kubelet[1760]: I1002 19:45:54.539149 1760 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:45:54.539210 kubelet[1760]: E1002 19:45:54.539188 1760 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:45:54.539888 kubelet[1760]: W1002 19:45:54.539827 1760 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.539888 kubelet[1760]: E1002 19:45:54.539865 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.539000 audit[1843]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.539000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8de8bbc0 a2=0 a3=7ffc8de8bbac items=0 ppid=1760 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:45:54.539000 audit[1846]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.539000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7b547730 a2=0 a3=7ffd7b54771c items=0 ppid=1760 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:45:54.540000 audit[1847]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:45:54.540000 audit[1847]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3de24c80 a2=0 a3=7ffe3de24c6c items=0 ppid=1760 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:45:54.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:45:54.579636 kubelet[1760]: E1002 19:45:54.579588 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:54.580493 kubelet[1760]: I1002 19:45:54.580457 1760 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Oct 2 19:45:54.580862 kubelet[1760]: E1002 19:45:54.580837 1760 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Oct 2 19:45:54.640020 kubelet[1760]: I1002 19:45:54.639941 1760 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:45:54.640995 kubelet[1760]: I1002 19:45:54.640955 1760 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:45:54.642269 kubelet[1760]: I1002 19:45:54.641892 1760 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:45:54.642806 kubelet[1760]: I1002 19:45:54.642777 1760 status_manager.go:667] "Failed to get status for pod" podUID=e2d649c28e59aa456441e56e4e0afa1d pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.9:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.9:6443: connect: connection refused" Oct 2 19:45:54.643208 kubelet[1760]: I1002 19:45:54.643071 1760 status_manager.go:667] "Failed to get status for pod" podUID=9d0e2e5c9b81579e9514841512d0dba5 pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.9:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.9:6443: connect: connection refused" Oct 2 19:45:54.643913 kubelet[1760]: I1002 19:45:54.643894 1760 status_manager.go:667] "Failed to get status for pod" podUID=c93b09bfbcf18533230bb5ccb3fad651 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.9:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.9:6443: connect: connection refused" Oct 2 19:45:54.679461 kubelet[1760]: E1002 19:45:54.679407 1760 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:54.680564 kubelet[1760]: E1002 19:45:54.680519 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:54.780915 kubelet[1760]: E1002 19:45:54.780836 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:54.780972 kubelet[1760]: I1002 19:45:54.780959 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:45:54.781019 kubelet[1760]: I1002 19:45:54.780994 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:45:54.781092 kubelet[1760]: I1002 19:45:54.781056 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:45:54.781146 kubelet[1760]: I1002 19:45:54.781137 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c93b09bfbcf18533230bb5ccb3fad651-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c93b09bfbcf18533230bb5ccb3fad651\") " pod="kube-system/kube-scheduler-localhost" Oct 2 19:45:54.781188 kubelet[1760]: I1002 19:45:54.781183 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:45:54.781241 kubelet[1760]: I1002 19:45:54.781221 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:45:54.781294 kubelet[1760]: I1002 19:45:54.781271 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:45:54.781329 kubelet[1760]: I1002 19:45:54.781308 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:45:54.781362 kubelet[1760]: I1002 19:45:54.781343 1760 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:45:54.783014 kubelet[1760]: I1002 19:45:54.782973 1760 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Oct 2 19:45:54.783455 kubelet[1760]: E1002 19:45:54.783435 1760 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Oct 2 19:45:54.880935 kubelet[1760]: E1002 19:45:54.880904 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:54.945331 kubelet[1760]: E1002 19:45:54.945310 1760 
dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:54.945851 env[1178]: time="2023-10-02T19:45:54.945810943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e2d649c28e59aa456441e56e4e0afa1d,Namespace:kube-system,Attempt:0,}" Oct 2 19:45:54.947983 kubelet[1760]: E1002 19:45:54.947968 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:54.948129 kubelet[1760]: E1002 19:45:54.948113 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:54.948323 env[1178]: time="2023-10-02T19:45:54.948295906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9d0e2e5c9b81579e9514841512d0dba5,Namespace:kube-system,Attempt:0,}" Oct 2 19:45:54.948377 env[1178]: time="2023-10-02T19:45:54.948328560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c93b09bfbcf18533230bb5ccb3fad651,Namespace:kube-system,Attempt:0,}" Oct 2 19:45:54.981799 kubelet[1760]: E1002 19:45:54.981773 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.080598 kubelet[1760]: E1002 19:45:55.080502 1760 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:55.082601 kubelet[1760]: E1002 19:45:55.082576 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.183078 kubelet[1760]: E1002 19:45:55.183036 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.184951 kubelet[1760]: I1002 19:45:55.184929 1760 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Oct 2 19:45:55.187553 kubelet[1760]: E1002 19:45:55.187527 1760 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Oct 2 19:45:55.283765 kubelet[1760]: E1002 19:45:55.283716 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.384245 kubelet[1760]: E1002 19:45:55.384144 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.440431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236268062.mount: Deactivated successfully. 
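Two recurring failure modes dominate this stretch of the log: the kubelet's "Nameserver limits exceeded" warnings (only the first three resolv.conf entries, 1.1.1.1 1.0.0.1 8.8.8.8, are applied) and the steady stream of "dial tcp 10.0.0.9:6443: connect: connection refused" errors while the lease controller backs off from 200ms to 400ms to 800ms waiting for the static-pod apiserver to come up. The following Python sketch reproduces both checks from the host side; it is illustrative only and not kubelet code, and the address, attempt count and timeouts are assumptions taken from the messages above.

#!/usr/bin/env python3
"""Host-side checks for the two failures repeating in the log above.

Illustrative sketch only, not kubelet code: the three-nameserver cap mirrors
the "Nameserver limits exceeded" warning, the doubling delays mirror the
lease controller's 200ms/400ms/800ms retries, and 10.0.0.9:6443 is simply
the apiserver address shown in the error messages.
"""
import socket
import time

def check_resolv_conf(path: str = "/etc/resolv.conf", limit: int = 3) -> None:
    """Warn if more nameservers are configured than the kubelet will apply."""
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    if len(servers) > limit:
        print(f"{len(servers)} nameservers configured; only the first {limit} "
              f"will be applied: {' '.join(servers[:limit])}")

def wait_for_apiserver(host: str = "10.0.0.9", port: int = 6443,
                       delay: float = 0.2, attempts: int = 6) -> bool:
    """Retry a plain TCP connect with doubling delays until it succeeds."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(delay)
            delay *= 2
    return False

if __name__ == "__main__":
    check_resolv_conf()
    print("apiserver reachable:", wait_for_apiserver())

Once the kube-apiserver static pod created below finishes starting, the TCP probe should begin succeeding, which is consistent with the node registration finally going through at 19:45:57 later in the log.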
Oct 2 19:45:55.445819 env[1178]: time="2023-10-02T19:45:55.445751891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.446687 env[1178]: time="2023-10-02T19:45:55.446663236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.449055 env[1178]: time="2023-10-02T19:45:55.448900091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.449995 env[1178]: time="2023-10-02T19:45:55.449956448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.450868 env[1178]: time="2023-10-02T19:45:55.450838527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.452309 env[1178]: time="2023-10-02T19:45:55.452272666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.453672 env[1178]: time="2023-10-02T19:45:55.453643863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.455064 env[1178]: time="2023-10-02T19:45:55.455039097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.457067 env[1178]: time="2023-10-02T19:45:55.457009295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.459719 env[1178]: time="2023-10-02T19:45:55.459671393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.460625 env[1178]: time="2023-10-02T19:45:55.460603809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.463469 env[1178]: time="2023-10-02T19:45:55.463427371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:45:55.484266 kubelet[1760]: E1002 19:45:55.484227 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.486820 env[1178]: time="2023-10-02T19:45:55.486713963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:45:55.486820 env[1178]: time="2023-10-02T19:45:55.486746857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:45:55.487021 env[1178]: time="2023-10-02T19:45:55.486777727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:45:55.487122 env[1178]: time="2023-10-02T19:45:55.487090432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b9f89028de4fe4c9361bb088bf141f9dac5d8b9737787e150bbd6cccf51097 pid=1872 runtime=io.containerd.runc.v2 Oct 2 19:45:55.487205 env[1178]: time="2023-10-02T19:45:55.486973025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:45:55.487205 env[1178]: time="2023-10-02T19:45:55.487024284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:45:55.487205 env[1178]: time="2023-10-02T19:45:55.487059814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:45:55.487366 env[1178]: time="2023-10-02T19:45:55.487213461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af61be2910777eb4b6c27d003c2f5b8aa1c6a95e6e3d5677f28da1476cbf8583 pid=1870 runtime=io.containerd.runc.v2 Oct 2 19:45:55.487444 env[1178]: time="2023-10-02T19:45:55.486609610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:45:55.487444 env[1178]: time="2023-10-02T19:45:55.486664147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:45:55.487444 env[1178]: time="2023-10-02T19:45:55.486677543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:45:55.487892 env[1178]: time="2023-10-02T19:45:55.487773947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247 pid=1871 runtime=io.containerd.runc.v2 Oct 2 19:45:55.510840 kubelet[1760]: W1002 19:45:55.509725 1760 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:55.510840 kubelet[1760]: E1002 19:45:55.509803 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:55.544049 env[1178]: time="2023-10-02T19:45:55.543996750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c93b09bfbcf18533230bb5ccb3fad651,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b9f89028de4fe4c9361bb088bf141f9dac5d8b9737787e150bbd6cccf51097\"" Oct 2 19:45:55.544888 env[1178]: time="2023-10-02T19:45:55.544837799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9d0e2e5c9b81579e9514841512d0dba5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247\"" Oct 2 19:45:55.545082 kubelet[1760]: E1002 19:45:55.545057 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:55.546211 kubelet[1760]: E1002 19:45:55.546188 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:55.546939 env[1178]: time="2023-10-02T19:45:55.546914313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e2d649c28e59aa456441e56e4e0afa1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"af61be2910777eb4b6c27d003c2f5b8aa1c6a95e6e3d5677f28da1476cbf8583\"" Oct 2 19:45:55.547963 env[1178]: time="2023-10-02T19:45:55.547931544Z" level=info msg="CreateContainer within sandbox \"35b9f89028de4fe4c9361bb088bf141f9dac5d8b9737787e150bbd6cccf51097\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 2 19:45:55.548526 kubelet[1760]: E1002 19:45:55.548507 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:55.551025 env[1178]: time="2023-10-02T19:45:55.550991694Z" level=info msg="CreateContainer within sandbox \"8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 2 19:45:55.551292 env[1178]: time="2023-10-02T19:45:55.551255835Z" level=info msg="CreateContainer within sandbox \"af61be2910777eb4b6c27d003c2f5b8aa1c6a95e6e3d5677f28da1476cbf8583\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 2 19:45:55.563679 kubelet[1760]: W1002 19:45:55.563613 1760 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:55.563679 kubelet[1760]: E1002 19:45:55.563673 1760 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Oct 2 19:45:55.576496 env[1178]: time="2023-10-02T19:45:55.576443752Z" level=info msg="CreateContainer within sandbox \"35b9f89028de4fe4c9361bb088bf141f9dac5d8b9737787e150bbd6cccf51097\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2714453211f61c1c00492a1c521306d75e7d970d00179f8a831a25c7799b346\"" Oct 2 19:45:55.577154 env[1178]: time="2023-10-02T19:45:55.577122428Z" level=info msg="StartContainer for \"e2714453211f61c1c00492a1c521306d75e7d970d00179f8a831a25c7799b346\"" Oct 2 19:45:55.580032 env[1178]: time="2023-10-02T19:45:55.579980144Z" level=info msg="CreateContainer within sandbox \"8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6\"" Oct 2 19:45:55.580446 env[1178]: time="2023-10-02T19:45:55.580413484Z" level=info msg="StartContainer for \"ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6\"" Oct 2 19:45:55.581884 env[1178]: time="2023-10-02T19:45:55.581844427Z" level=info msg="CreateContainer within sandbox \"af61be2910777eb4b6c27d003c2f5b8aa1c6a95e6e3d5677f28da1476cbf8583\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40d190a573fad1da38fa2f722b5d66359788657ebcecf736408da961d120bee6\"" Oct 2 19:45:55.582171 env[1178]: time="2023-10-02T19:45:55.582138527Z" level=info msg="StartContainer for \"40d190a573fad1da38fa2f722b5d66359788657ebcecf736408da961d120bee6\"" Oct 2 19:45:55.585364 kubelet[1760]: E1002 19:45:55.585336 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.633984 env[1178]: time="2023-10-02T19:45:55.633940765Z" level=info msg="StartContainer for \"e2714453211f61c1c00492a1c521306d75e7d970d00179f8a831a25c7799b346\" returns successfully" Oct 2 19:45:55.653046 env[1178]: time="2023-10-02T19:45:55.652965288Z" level=info msg="StartContainer for \"40d190a573fad1da38fa2f722b5d66359788657ebcecf736408da961d120bee6\" returns successfully" Oct 2 19:45:55.658725 env[1178]: time="2023-10-02T19:45:55.658675382Z" level=info msg="StartContainer for \"ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6\" returns successfully" Oct 2 19:45:55.685936 kubelet[1760]: E1002 19:45:55.685873 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.786425 kubelet[1760]: E1002 19:45:55.786350 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.886725 kubelet[1760]: E1002 19:45:55.886687 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.987379 kubelet[1760]: E1002 19:45:55.987236 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:55.989207 kubelet[1760]: I1002 19:45:55.989191 1760 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Oct 2 19:45:56.088033 kubelet[1760]: E1002 
19:45:56.087996 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.189069 kubelet[1760]: E1002 19:45:56.189024 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.289643 kubelet[1760]: E1002 19:45:56.289540 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.390077 kubelet[1760]: E1002 19:45:56.390029 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.490390 kubelet[1760]: E1002 19:45:56.490341 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.544374 kubelet[1760]: E1002 19:45:56.544276 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:56.546010 kubelet[1760]: E1002 19:45:56.545992 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:56.547594 kubelet[1760]: E1002 19:45:56.547572 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:56.591151 kubelet[1760]: E1002 19:45:56.591094 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.691588 kubelet[1760]: E1002 19:45:56.691538 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.791946 kubelet[1760]: E1002 19:45:56.791900 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.892555 kubelet[1760]: E1002 19:45:56.892448 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:56.993126 kubelet[1760]: E1002 19:45:56.993052 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.093497 kubelet[1760]: E1002 19:45:57.093448 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.194020 kubelet[1760]: E1002 19:45:57.193962 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.294465 kubelet[1760]: E1002 19:45:57.294409 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.385499 kubelet[1760]: E1002 19:45:57.385466 1760 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:45:57.395250 kubelet[1760]: E1002 19:45:57.395217 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.468859 kubelet[1760]: I1002 19:45:57.468733 1760 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Oct 2 19:45:57.495642 kubelet[1760]: E1002 19:45:57.495596 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.549459 kubelet[1760]: E1002 19:45:57.549410 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:57.549618 kubelet[1760]: E1002 19:45:57.549530 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:57.549742 
kubelet[1760]: E1002 19:45:57.549721 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:57.596538 kubelet[1760]: E1002 19:45:57.596510 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.696946 kubelet[1760]: E1002 19:45:57.696911 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.771294 kubelet[1760]: E1002 19:45:57.771145 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4b3febe1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 470480865, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 470480865, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 2 19:45:57.797424 kubelet[1760]: E1002 19:45:57.797386 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.824609 kubelet[1760]: E1002 19:45:57.824535 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4b7b7311", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 474382097, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 474382097, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 2 19:45:57.877453 kubelet[1760]: E1002 19:45:57.877365 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de21f57", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514665303, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514665303, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 2 19:45:57.897745 kubelet[1760]: E1002 19:45:57.897703 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:57.930888 kubelet[1760]: E1002 19:45:57.930815 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de25621", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514679329, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514679329, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 2 19:45:57.984170 kubelet[1760]: E1002 19:45:57.984072 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de26439", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514682937, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514682937, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 2 19:45:57.998271 kubelet[1760]: E1002 19:45:57.998238 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:58.037865 kubelet[1760]: E1002 19:45:58.037716 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4e7f5b22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 524969762, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 524969762, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 2 19:45:58.093258 kubelet[1760]: E1002 19:45:58.093156 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de21f57", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514665303, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 580405734, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 2 19:45:58.099074 kubelet[1760]: E1002 19:45:58.099032 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:58.147784 kubelet[1760]: E1002 19:45:58.147700 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de25621", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514679329, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 580418690, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Oct 2 19:45:58.199318 kubelet[1760]: E1002 19:45:58.199264 1760 kubelet.go:2448] "Error getting node" err="node \"localhost\" not found" Oct 2 19:45:58.203482 kubelet[1760]: E1002 19:45:58.203393 1760 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61fe4de26439", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 514682937, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 45, 54, 580422887, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Oct 2 19:45:58.475276 kubelet[1760]: I1002 19:45:58.475235 1760 apiserver.go:52] "Watching apiserver" Oct 2 19:45:58.501487 kubelet[1760]: I1002 19:45:58.501432 1760 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:45:58.555652 kubelet[1760]: E1002 19:45:58.555619 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:59.525687 kubelet[1760]: E1002 19:45:59.525662 1760 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:59.550716 kubelet[1760]: E1002 19:45:59.550683 1760 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:59.701528 systemd[1]: Reloading. Oct 2 19:45:59.759833 /usr/lib/systemd/system-generators/torcx-generator[2121]: time="2023-10-02T19:45:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:45:59.759862 /usr/lib/systemd/system-generators/torcx-generator[2121]: time="2023-10-02T19:45:59Z" level=info msg="torcx already run" Oct 2 19:45:59.828452 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:45:59.828468 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 2 19:45:59.846622 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:45:59.919243 systemd[1]: Stopping kubelet.service... Oct 2 19:45:59.919912 kubelet[1760]: I1002 19:45:59.919872 1760 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:45:59.940364 systemd[1]: kubelet.service: Deactivated successfully. Oct 2 19:45:59.940694 systemd[1]: Stopped kubelet.service. Oct 2 19:45:59.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:59.941494 kernel: kauditd_printk_skb: 104 callbacks suppressed Oct 2 19:45:59.941539 kernel: audit: type=1131 audit(1696275959.939:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:59.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:59.942549 systemd[1]: Started kubelet.service. Oct 2 19:45:59.946233 kernel: audit: type=1130 audit(1696275959.941:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:45:59.994588 kubelet[2169]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:45:59.994994 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:45:59.995064 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:45:59.995223 kubelet[2169]: I1002 19:45:59.995188 2169 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:45:59.996869 kubelet[2169]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:45:59.996941 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:45:59.997010 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 2 19:45:59.999494 kubelet[2169]: I1002 19:45:59.999460 2169 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:45:59.999539 kubelet[2169]: I1002 19:45:59.999508 2169 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:45:59.999769 kubelet[2169]: I1002 19:45:59.999749 2169 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:46:00.000927 kubelet[2169]: I1002 19:46:00.000910 2169 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 2 19:46:00.001623 kubelet[2169]: I1002 19:46:00.001595 2169 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:46:00.005988 kubelet[2169]: I1002 19:46:00.005967 2169 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:46:00.006351 kubelet[2169]: I1002 19:46:00.006331 2169 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:46:00.006634 kubelet[2169]: I1002 19:46:00.006616 2169 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:46:00.006709 kubelet[2169]: I1002 19:46:00.006638 2169 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:46:00.006709 kubelet[2169]: I1002 19:46:00.006648 2169 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:46:00.006709 kubelet[2169]: I1002 19:46:00.006677 2169 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:46:00.010942 kubelet[2169]: I1002 19:46:00.010897 2169 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:46:00.011110 kubelet[2169]: I1002 19:46:00.011088 2169 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:46:00.011166 kubelet[2169]: I1002 19:46:00.011116 2169 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:46:00.011166 kubelet[2169]: I1002 19:46:00.011139 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:46:00.014102 kubelet[2169]: I1002 19:46:00.014066 2169 kuberuntime_manager.go:240] "Container runtime initialized" 
containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:46:00.014484 kubelet[2169]: I1002 19:46:00.014473 2169 server.go:1175] "Started kubelet" Oct 2 19:46:00.015584 kubelet[2169]: I1002 19:46:00.015561 2169 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:46:00.016077 kubelet[2169]: I1002 19:46:00.016058 2169 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:46:00.016000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:00.017532 kubelet[2169]: I1002 19:46:00.017453 2169 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:46:00.017532 kubelet[2169]: I1002 19:46:00.017478 2169 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:46:00.017532 kubelet[2169]: I1002 19:46:00.017496 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:46:00.016000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:46:00.022340 kernel: audit: type=1400 audit(1696275960.016:219): avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:00.022379 kernel: audit: type=1401 audit(1696275960.016:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:46:00.022418 kernel: audit: type=1300 audit(1696275960.016:219): arch=c000003e syscall=188 success=no exit=-22 a0=c000b621b0 a1=c00064c738 a2=c000b62180 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:00.016000 audit[2169]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b621b0 a1=c00064c738 a2=c000b62180 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:00.016000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:46:00.026434 kubelet[2169]: E1002 19:46:00.026402 2169 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:46:00.026492 kubelet[2169]: E1002 19:46:00.026440 2169 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:46:00.026979 kernel: audit: type=1327 audit(1696275960.016:219): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:46:00.029171 kernel: audit: type=1400 audit(1696275960.016:220): avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:00.016000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:00.029279 kubelet[2169]: I1002 19:46:00.027459 2169 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:46:00.030448 kernel: audit: type=1401 audit(1696275960.016:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:46:00.016000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:46:00.030934 kubelet[2169]: I1002 19:46:00.030613 2169 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:46:00.016000 audit[2169]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007ba840 a1=c00064c750 a2=c000b62240 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:00.032361 kubelet[2169]: E1002 19:46:00.031541 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:00.036036 kernel: audit: type=1300 audit(1696275960.016:220): arch=c000003e syscall=188 success=no exit=-22 a0=c0007ba840 a1=c00064c750 a2=c000b62240 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:00.036157 kernel: audit: type=1327 audit(1696275960.016:220): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:46:00.016000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:46:00.056598 kubelet[2169]: I1002 19:46:00.056569 2169 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:46:00.066029 kubelet[2169]: I1002 19:46:00.066006 2169 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:46:00.066029 kubelet[2169]: I1002 19:46:00.066028 2169 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:46:00.066126 kubelet[2169]: I1002 19:46:00.066045 2169 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:46:00.066126 kubelet[2169]: E1002 19:46:00.066091 2169 kubelet.go:2034] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:46:00.084000 audit[2225]: USER_ACCT pid=2225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:00.084000 audit[2225]: CRED_REFR pid=2225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:00.084976 sudo[2225]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 2 19:46:00.086000 audit[2225]: USER_START pid=2225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:00.085128 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 2 19:46:00.088188 kubelet[2169]: I1002 19:46:00.088086 2169 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:46:00.088188 kubelet[2169]: I1002 19:46:00.088121 2169 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:46:00.088188 kubelet[2169]: I1002 19:46:00.088136 2169 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:46:00.088299 kubelet[2169]: I1002 19:46:00.088285 2169 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 2 19:46:00.088325 kubelet[2169]: I1002 19:46:00.088301 2169 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Oct 2 19:46:00.088325 kubelet[2169]: I1002 19:46:00.088307 2169 policy_none.go:49] "None policy: Start" Oct 2 19:46:00.088724 kubelet[2169]: I1002 19:46:00.088699 2169 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:46:00.088724 kubelet[2169]: I1002 19:46:00.088716 2169 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:46:00.088853 kubelet[2169]: I1002 19:46:00.088831 2169 state_mem.go:75] "Updated machine memory state" Oct 2 19:46:00.090000 kubelet[2169]: I1002 19:46:00.089984 2169 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:46:00.089000 audit[2169]: AVC avc: denied { mac_admin } for pid=2169 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:00.089000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:46:00.089000 audit[2169]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0013f1680 a1=c0013eed20 a2=c0013f1650 a3=25 items=0 ppid=1 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:00.089000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:46:00.090187 kubelet[2169]: I1002 19:46:00.090033 2169 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:46:00.090187 kubelet[2169]: I1002 19:46:00.090178 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:46:00.131737 kubelet[2169]: I1002 19:46:00.131711 2169 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Oct 2 19:46:00.138299 kubelet[2169]: I1002 19:46:00.138272 2169 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Oct 2 19:46:00.139012 kubelet[2169]: I1002 19:46:00.138332 2169 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Oct 2 19:46:00.166287 kubelet[2169]: I1002 19:46:00.166245 2169 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:00.166428 kubelet[2169]: I1002 19:46:00.166327 2169 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:00.166428 kubelet[2169]: I1002 19:46:00.166349 2169 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:00.170403 kubelet[2169]: E1002 19:46:00.170390 2169 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 2 19:46:00.331582 kubelet[2169]: I1002 19:46:00.331531 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:00.331582 kubelet[2169]: I1002 19:46:00.331586 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:00.331760 kubelet[2169]: I1002 19:46:00.331605 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:00.331760 kubelet[2169]: I1002 19:46:00.331699 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:00.331760 kubelet[2169]: I1002 19:46:00.331743 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:00.331863 kubelet[2169]: I1002 19:46:00.331763 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c93b09bfbcf18533230bb5ccb3fad651-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c93b09bfbcf18533230bb5ccb3fad651\") " pod="kube-system/kube-scheduler-localhost" Oct 2 19:46:00.331863 kubelet[2169]: I1002 19:46:00.331802 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:00.331863 kubelet[2169]: I1002 19:46:00.331841 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2d649c28e59aa456441e56e4e0afa1d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e2d649c28e59aa456441e56e4e0afa1d\") " pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:00.331936 kubelet[2169]: I1002 19:46:00.331883 2169 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d0e2e5c9b81579e9514841512d0dba5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9d0e2e5c9b81579e9514841512d0dba5\") " pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:00.471660 kubelet[2169]: E1002 19:46:00.471622 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:00.516252 kubelet[2169]: E1002 19:46:00.516214 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:00.520466 sudo[2225]: pam_unix(sudo:session): session closed for user root Oct 2 19:46:00.519000 audit[2225]: USER_END pid=2225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:00.519000 audit[2225]: CRED_DISP pid=2225 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:46:00.616538 kubelet[2169]: E1002 19:46:00.616481 2169 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:00.617117 kubelet[2169]: E1002 19:46:00.617096 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:01.012857 kubelet[2169]: I1002 19:46:01.012797 2169 apiserver.go:52] "Watching apiserver" Oct 2 19:46:01.036651 kubelet[2169]: I1002 19:46:01.036580 2169 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:46:01.071964 kubelet[2169]: I1002 19:46:01.071932 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-controller-manager-localhost" podUID=7f653a14-495b-45c5-84c2-5442011192b8 Oct 2 19:46:01.347000 audit[1332]: USER_END pid=1332 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:01.347000 audit[1332]: CRED_DISP pid=1332 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:46:01.348503 sudo[1332]: pam_unix(sudo:session): session closed for user root Oct 2 19:46:01.349563 sshd[1326]: pam_unix(sshd:session): session closed for user core Oct 2 19:46:01.349000 audit[1326]: USER_END pid=1326 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:46:01.349000 audit[1326]: CRED_DISP pid=1326 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:46:01.351552 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:54488.service: Deactivated successfully. Oct 2 19:46:01.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.9:22-10.0.0.1:54488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:46:01.352493 systemd-logind[1162]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:46:01.352525 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:46:01.353215 systemd-logind[1162]: Removed session 7. 
Oct 2 19:46:01.518067 kubelet[2169]: I1002 19:46:01.517995 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:01.638955 kubelet[2169]: E1002 19:46:01.638831 2169 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 2 19:46:01.639209 kubelet[2169]: E1002 19:46:01.639172 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:01.884685 kubelet[2169]: E1002 19:46:01.884639 2169 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:01.885323 kubelet[2169]: E1002 19:46:01.885294 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:02.014621 kubelet[2169]: I1002 19:46:02.014582 2169 status_manager.go:691] "Failed to update status for pod" pod="kube-system/kube-controller-manager-localhost" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f653a14-495b-45c5-84c2-5442011192b8\\\"},\\\"status\\\":{\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"10.0.0.9\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6\\\",\\\"image\\\":\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2023-10-02T19:45:55Z\\\"}}}],\\\"podIP\\\":\\\"10.0.0.9\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"10.0.0.9\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"10.0.0.10\\\"}]}}\" for pod \"kube-system\"/\"kube-controller-manager-localhost\": pods \"kube-controller-manager-localhost\" not found" Oct 2 19:46:02.073548 kubelet[2169]: I1002 19:46:02.073519 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-scheduler-localhost" podUID=1bf0a39f-c367-4634-8a64-8e7670a849b9 Oct 2 19:46:02.073877 kubelet[2169]: I1002 19:46:02.073832 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=104de83d-729b-4cdd-a816-34b9a72f9591 Oct 2 19:46:02.217151 kubelet[2169]: E1002 19:46:02.217117 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:02.700091 kubelet[2169]: I1002 19:46:02.700042 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-scheduler-localhost" Oct 2 19:46:02.816846 kubelet[2169]: I1002 19:46:02.816810 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:03.029945 kubelet[2169]: I1002 19:46:03.029835 2169 status_manager.go:691] "Failed to update status for pod" pod="kube-system/kube-scheduler-localhost" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf0a39f-c367-4634-8a64-8e7670a849b9\\\"},\\\"status\\\":{\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"10.0.0.9\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://e2714453211f61c1c00492a1c521306d75e7d970d00179f8a831a25c7799b346\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2023-10-02T19:45:55Z\\\"}}}],\\\"podIP\\\":\\\"10.0.0.9\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"10.0.0.9\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"10.0.0.10\\\"}]}}\" for pod \"kube-system\"/\"kube-scheduler-localhost\": pods \"kube-scheduler-localhost\" not found" Oct 2 19:46:03.074960 kubelet[2169]: E1002 19:46:03.074933 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:03.216096 kubelet[2169]: E1002 19:46:03.216052 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:03.415591 kubelet[2169]: E1002 19:46:03.415556 2169 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:03.416033 kubelet[2169]: E1002 19:46:03.416016 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:04.075658 kubelet[2169]: I1002 19:46:04.075617 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=2bf24285-b7f4-4e33-93c9-0dca64603bbd Oct 2 19:46:04.076104 kubelet[2169]: I1002 19:46:04.075751 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-controller-manager-localhost" podUID=729fab53-7208-435a-9168-536f20b1d696 Oct 2 19:46:04.076104 kubelet[2169]: E1002 19:46:04.075979 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:04.438756 kubelet[2169]: I1002 19:46:04.438723 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:04.614160 kubelet[2169]: E1002 19:46:04.614102 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:05.015812 kubelet[2169]: E1002 19:46:05.015766 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:05.077844 kubelet[2169]: I1002 19:46:05.077809 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-scheduler-localhost" podUID=2e50c654-e31c-4bad-b1bb-708c91bb777f Oct 2 19:46:05.077844 kubelet[2169]: I1002 19:46:05.077809 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-controller-manager-localhost" podUID=6f06c254-3f34-47b5-b19b-e3aa82593689 Oct 2 19:46:05.078460 kubelet[2169]: E1002 
19:46:05.078435 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:05.090774 kubelet[2169]: E1002 19:46:05.090745 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:05.417559 kubelet[2169]: I1002 19:46:05.417510 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-scheduler-localhost" Oct 2 19:46:05.618391 kubelet[2169]: I1002 19:46:05.618352 2169 kubelet.go:1699] "Deleted mirror pod because it is outdated" pod="kube-system/kube-controller-manager-localhost" Oct 2 19:46:05.814894 kubelet[2169]: I1002 19:46:05.814765 2169 status_manager.go:691] "Failed to update status for pod" pod="kube-system/kube-scheduler-localhost" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e50c654-e31c-4bad-b1bb-708c91bb777f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2023-10-02T19:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2023-10-02T19:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2023-10-02T19:45:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2023-10-02T19:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://e2714453211f61c1c00492a1c521306d75e7d970d00179f8a831a25c7799b346\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2023-10-02T19:45:55Z\\\"}}}],\\\"hostIP\\\":\\\"10.0.0.10\\\",\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"10.0.0.9\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"10.0.0.9\\\"}],\\\"startTime\\\":\\\"2023-10-02T19:45:55Z\\\"}}\" for pod \"kube-system\"/\"kube-scheduler-localhost\": pods \"kube-scheduler-localhost\" not found" Oct 2 19:46:06.016484 kubelet[2169]: E1002 19:46:06.016431 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:06.079930 kubelet[2169]: E1002 19:46:06.079811 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:06.216163 kubelet[2169]: E1002 19:46:06.216122 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:06.406403 kubelet[2169]: I1002 19:46:06.406364 2169 kubelet.go:1694] "Trying to delete pod" 
pod="kube-system/kube-apiserver-localhost" podUID=96ce9f2c-29b5-42d9-83d0-12460fe02321 Oct 2 19:46:06.767442 kubelet[2169]: E1002 19:46:06.767334 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:07.081800 kubelet[2169]: I1002 19:46:07.081691 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=96ce9f2c-29b5-42d9-83d0-12460fe02321 Oct 2 19:46:07.082374 kubelet[2169]: E1002 19:46:07.082351 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:07.084990 kubelet[2169]: E1002 19:46:07.084949 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:08.082896 kubelet[2169]: I1002 19:46:08.082867 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e Oct 2 19:46:10.091742 kubelet[2169]: E1002 19:46:10.091585 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:10.317165 kubelet[2169]: E1002 19:46:10.317090 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:46:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:46:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:46:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:46:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"size
Bytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.9:37422->10.0.0.2:2379: read: connection reset by peer" Oct 2 19:46:11.779782 kubelet[2169]: E1002 19:46:11.779680 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:12.421217 kubelet[2169]: E1002 19:46:12.421106 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:15.094482 kubelet[2169]: E1002 19:46:15.094356 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:18.837150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6-rootfs.mount: Deactivated successfully. Oct 2 19:46:19.390002 env[1178]: time="2023-10-02T19:46:19.388695977Z" level=info msg="shim disconnected" id=ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6 Oct 2 19:46:19.390002 env[1178]: time="2023-10-02T19:46:19.388758075Z" level=warning msg="cleaning up after shim disconnected" id=ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6 namespace=k8s.io Oct 2 19:46:19.390002 env[1178]: time="2023-10-02T19:46:19.388771641Z" level=info msg="cleaning up dead shim" Oct 2 19:46:19.408721 env[1178]: time="2023-10-02T19:46:19.408643220Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2287 runtime=io.containerd.runc.v2\n" Oct 2 19:46:20.095827 kubelet[2169]: E1002 19:46:20.095798 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:20.151120 kubelet[2169]: I1002 19:46:20.150488 2169 scope.go:115] "RemoveContainer" containerID="ea66272d37fd5ce50233c2169aeec60879cccb741f248774aad2289243de81e6" Oct 2 19:46:20.151120 kubelet[2169]: E1002 19:46:20.150563 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:20.157451 env[1178]: time="2023-10-02T19:46:20.156836214Z" level=info msg="CreateContainer within sandbox \"8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 2 19:46:20.169657 kubelet[2169]: E1002 19:46:20.169290 2169 controller.go:187] failed to update lease, error: Put 
"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 19:46:20.322399 kubelet[2169]: E1002 19:46:20.320894 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.9:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:46:20.645360 env[1178]: time="2023-10-02T19:46:20.642535904Z" level=info msg="CreateContainer within sandbox \"8baf681c9234fa01caa06ccfbaccfe6bcdfdd686557b9585a585704fc03fd247\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"53b27a5fafe2275a5a5e5cf7e1742aa535d0befecd724511aacd96cf19c40c86\"" Oct 2 19:46:20.645926 env[1178]: time="2023-10-02T19:46:20.645427206Z" level=info msg="StartContainer for \"53b27a5fafe2275a5a5e5cf7e1742aa535d0befecd724511aacd96cf19c40c86\"" Oct 2 19:46:20.959687 env[1178]: time="2023-10-02T19:46:20.959619149Z" level=info msg="StartContainer for \"53b27a5fafe2275a5a5e5cf7e1742aa535d0befecd724511aacd96cf19c40c86\" returns successfully" Oct 2 19:46:21.170592 kubelet[2169]: E1002 19:46:21.170531 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:22.172492 kubelet[2169]: E1002 19:46:22.172460 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:25.104101 kubelet[2169]: E1002 19:46:25.097835 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:30.110582 kubelet[2169]: E1002 19:46:30.110049 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:30.171540 kubelet[2169]: E1002 19:46:30.171270 2169 controller.go:187] failed to update lease, error: Put "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 19:46:30.323158 kubelet[2169]: E1002 19:46:30.323104 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.9:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Oct 2 19:46:31.779007 kubelet[2169]: E1002 19:46:31.778963 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:35.110945 kubelet[2169]: E1002 19:46:35.110917 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:40.111223 kubelet[2169]: E1002 19:46:40.111135 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:40.172375 kubelet[2169]: E1002 19:46:40.172325 2169 
controller.go:187] failed to update lease, error: Put "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 19:46:40.324249 kubelet[2169]: E1002 19:46:40.324200 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.9:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:46:42.085110 kubelet[2169]: E1002 19:46:42.085070 2169 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-apiserver-localhost" Oct 2 19:46:42.085558 kubelet[2169]: E1002 19:46:42.085480 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:44.297430 kubelet[2169]: E1002 19:46:44.297306 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.178a61ff9d24e805", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"275", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Rebooted", Message:"Node localhost has been rebooted, boot id: b03dde40-2dfa-4722-a5a3-8a83b4c6afbc", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 10, 292706171, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within requested timeout - context deadline exceeded' (will not retry!) 
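The recurring "Nameserver limits exceeded" warnings in this log come from the kubelet's resolv.conf handling: the classic glibc resolver only honors the first three nameserver entries, so when the node's resolv.conf lists more, the kubelet omits the extras and records the applied set (here 1.1.1.1 1.0.0.1 8.8.8.8). The Go sketch below is not the kubelet's actual code; it is a minimal illustration, assuming a resolv.conf-style input, of applying such a three-entry cap.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (MAXNS = 3).
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries,
// returning the applied list and the entries that were dropped.
func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	var all []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	// Hypothetical resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := applyNameserverLimit(conf)
	if len(omitted) > 0 {
		fmt.Printf("nameserver limits exceeded, applied: %s (omitted: %s)\n",
			strings.Join(applied, " "), strings.Join(omitted, " "))
	}
}
```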
Oct 2 19:46:45.112607 kubelet[2169]: E1002 19:46:45.112578 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:50.113072 kubelet[2169]: E1002 19:46:50.113040 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:50.173045 kubelet[2169]: E1002 19:46:50.172984 2169 controller.go:187] failed to update lease, error: Put "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 19:46:50.325178 kubelet[2169]: E1002 19:46:50.325125 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes localhost)" Oct 2 19:46:50.325178 kubelet[2169]: E1002 19:46:50.325169 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:46:51.486048 kubelet[2169]: E1002 19:46:51.486010 2169 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "localhost": StorageError: invalid object, Code: 4, Key: /registry/leases/kube-node-lease/localhost, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ad2b6a95-4bfe-40c1-b9d6-6ade99f3da45, UID in object meta: Oct 2 19:46:51.489464 kubelet[2169]: E1002 19:46:51.489442 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:46:51.489464 kubelet[2169]: I1002 19:46:51.489461 2169 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Oct 2 19:46:51.491968 kubelet[2169]: E1002 19:46:51.491948 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:46:55.114398 kubelet[2169]: E1002 19:46:55.114370 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:55.156658 kubelet[2169]: E1002 19:46:55.156538 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a61ffb39a2bf1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", 
Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 11, 779659214, time.Local), Count:7, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:55.160679 kubelet[2169]: E1002 19:46:55.160575 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-scheduler-localhost.178a61ffb0f1ada8", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-localhost", UID:"c93b09bfbcf18533230bb5ccb3fad651", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 12, 421082089, time.Local), Count:7, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:55.162574 kubelet[2169]: E1002 19:46:55.162515 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a62031afb19b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
Oct 2 19:46:55.166809 kubelet[2169]: E1002 19:46:55.166740 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a61ffb39a2bf1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 150549883, time.Local), Count:8, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:55.168531 kubelet[2169]: E1002 19:46:55.168479 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a6204460675cc", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-controller-manager:v1.25.14\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 152632780, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 152632780, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
Oct 2 19:46:55.170287 kubelet[2169]: E1002 19:46:55.170172 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a6204634a3431", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}, Reason:"Created", Message:"Created container kube-controller-manager", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 643611697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 643611697, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:55.172261 kubelet[2169]: E1002 19:46:55.172138 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a620476253469", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-controller-manager}"}, Reason:"Started", Message:"Started container kube-controller-manager", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 959954025, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 20, 959954025, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
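The steady "Container runtime network not ready ... cni plugin not initialized" errors repeating every few seconds indicate that the container runtime has not loaded a usable CNI network configuration, so the node cannot report NetworkReady. One way to reason about this state is to look at the CNI config directory; the Go sketch below, assuming the conventional /etc/cni/net.d location, only illustrates that check and is not part of the kubelet or containerd.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfDir is the conventional default location for CNI network configs;
// an empty directory is a common reason for "cni plugin not initialized".
const cniConfDir = "/etc/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", cniConfDir, err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI network config found; the network plugin would stay uninitialized")
		return
	}
	fmt.Println("CNI configs present:", confs)
}
```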
Oct 2 19:46:55.357094 kubelet[2169]: E1002 19:46:55.356993 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a61ffb39a2bf1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 21, 170508823, time.Local), Count:9, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:55.755837 kubelet[2169]: E1002 19:46:55.755723 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a61ffb39a2bf1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 22, 172437344, time.Local), Count:10, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
Oct 2 19:46:56.361161 kubelet[2169]: E1002 19:46:56.361057 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a62031afb19b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 25, 132362697, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:56.557027 kubelet[2169]: E1002 19:46:56.556917 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-controller-manager-localhost.178a61ffb39a2bf1", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-localhost", UID:"9d0e2e5c9b81579e9514841512d0dba5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 31, 778936524, time.Local), Count:11, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
Oct 2 19:46:56.956106 kubelet[2169]: E1002 19:46:56.956004 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a62031afb19b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 35, 117307669, time.Local), Count:3, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:57.355763 kubelet[2169]: E1002 19:46:57.355603 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a61ffb99d201c", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"DNSConfigForming", Message:"Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 0, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 42, 85464003, time.Local), Count:8, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
Oct 2 19:46:57.756405 kubelet[2169]: E1002 19:46:57.756301 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a62031afb19b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 45, 117523210, time.Local), Count:4, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) Oct 2 19:46:58.156064 kubelet[2169]: E1002 19:46:58.155974 2169 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-localhost.178a62031afb19b9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-localhost", UID:"e2d649c28e59aa456441e56e4e0afa1d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 46, 15, 135500729, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 46, 55, 119091902, time.Local), Count:5, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: requested lease not found' (will not retry!) 
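The very long "failed to patch status" errors embed the strategic-merge-patch body the kubelet sends to the node's status subresource: a $setElementOrder/conditions ordering directive plus refreshed lastHeartbeatTime values, the address list, the image list, and nodeInfo. The Go sketch below assembles a stripped-down patch of the same shape (conditions only); it is illustrative and not the kubelet's implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// condPatch is the minimal per-condition fragment being patched:
// the type (used as the merge key) and a refreshed heartbeat time.
type condPatch struct {
	Type              string `json:"type"`
	LastHeartbeatTime string `json:"lastHeartbeatTime,omitempty"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	types := []string{"MemoryPressure", "DiskPressure", "PIDPressure", "Ready"}

	order := make([]condPatch, 0, len(types))
	conds := make([]condPatch, 0, len(types))
	for _, t := range types {
		order = append(order, condPatch{Type: t})                        // ordering hint only
		conds = append(conds, condPatch{Type: t, LastHeartbeatTime: now}) // actual update
	}

	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": order,
			"conditions":                  conds,
		},
	}

	body, _ := json.Marshal(patch)
	// body would be sent as the payload of a PATCH to .../api/v1/nodes/<name>/status
	fmt.Println(string(body))
}
```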
Oct 2 19:47:00.115017 kubelet[2169]: E1002 19:47:00.114971 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:00.656967 kubelet[2169]: E1002 19:47:00.656935 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:01.004550 kubelet[2169]: E1002 19:47:01.004242 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:01.039662 kubelet[2169]: E1002 19:47:01.039617 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:01.057401 kubelet[2169]: E1002 
19:47:01.057364 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:01.058953 kubelet[2169]: E1002 19:47:01.058923 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:01.058953 kubelet[2169]: E1002 19:47:01.058944 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:01.846461 kubelet[2169]: E1002 19:47:01.846425 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:03.164829 kubelet[2169]: I1002 19:47:03.164780 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e Oct 2 19:47:03.167065 kubelet[2169]: E1002 19:47:03.167052 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:03.236800 kubelet[2169]: I1002 19:47:03.236746 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e Oct 2 19:47:03.238527 kubelet[2169]: E1002 19:47:03.238498 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:05.115902 kubelet[2169]: E1002 19:47:05.115860 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:10.116751 kubelet[2169]: E1002 19:47:10.116713 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:11.181583 kubelet[2169]: E1002 19:47:11.181547 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:11.555938 kubelet[2169]: E1002 19:47:11.555668 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:11.811752 kubelet[2169]: E1002 19:47:11.811629 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:11.813358 kubelet[2169]: E1002 19:47:11.813327 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:11.814590 kubelet[2169]: E1002 19:47:11.814553 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:11.814590 kubelet[2169]: E1002 
19:47:11.814574 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:11.978593 kubelet[2169]: E1002 19:47:11.978561 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:15.117954 kubelet[2169]: E1002 19:47:15.117918 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:20.118745 kubelet[2169]: E1002 19:47:20.118722 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:21.944927 kubelet[2169]: E1002 19:47:21.944894 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:21Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"s
ystemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:21.957030 kubelet[2169]: E1002 19:47:21.956996 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:21.958855 kubelet[2169]: E1002 19:47:21.958840 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:21.960580 kubelet[2169]: E1002 19:47:21.960554 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:21.962123 kubelet[2169]: E1002 19:47:21.962098 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:21.962123 kubelet[2169]: E1002 19:47:21.962113 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:22.072736 kubelet[2169]: E1002 19:47:22.072702 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:25.121681 kubelet[2169]: E1002 19:47:25.121653 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:26.067528 kubelet[2169]: E1002 19:47:26.067477 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:30.122107 kubelet[2169]: E1002 19:47:30.122080 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:32.169385 kubelet[2169]: E1002 19:47:32.169318 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:31Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:31Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:31Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:31Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:32.171328 kubelet[2169]: E1002 19:47:32.171305 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:32.172622 kubelet[2169]: E1002 19:47:32.172601 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:32.174296 kubelet[2169]: E1002 19:47:32.174254 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:32.175637 kubelet[2169]: E1002 19:47:32.175616 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:32.175701 kubelet[2169]: E1002 
19:47:32.175663 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:32.354633 kubelet[2169]: E1002 19:47:32.354593 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:35.123092 kubelet[2169]: E1002 19:47:35.123062 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:40.123546 kubelet[2169]: E1002 19:47:40.123517 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:42.561072 kubelet[2169]: E1002 19:47:42.561022 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"s
ystemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:42.571740 kubelet[2169]: E1002 19:47:42.571684 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:42.573508 kubelet[2169]: E1002 19:47:42.573493 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:42.575383 kubelet[2169]: E1002 19:47:42.575351 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:42.576846 kubelet[2169]: E1002 19:47:42.576829 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:42.576846 kubelet[2169]: E1002 19:47:42.576844 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:42.713032 kubelet[2169]: E1002 19:47:42.712999 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:45.124827 kubelet[2169]: E1002 19:47:45.124780 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:50.125212 kubelet[2169]: E1002 19:47:50.125171 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:52.731020 kubelet[2169]: E1002 19:47:52.730972 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:47:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:52.750164 kubelet[2169]: E1002 19:47:52.750135 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:52.751838 kubelet[2169]: E1002 19:47:52.751815 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:52.751963 kubelet[2169]: E1002 19:47:52.751851 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:47:52.753222 kubelet[2169]: E1002 19:47:52.753205 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:52.754447 kubelet[2169]: E1002 
19:47:52.754430 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:47:52.754447 kubelet[2169]: E1002 19:47:52.754444 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:47:54.067160 kubelet[2169]: E1002 19:47:54.067064 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:55.126590 kubelet[2169]: E1002 19:47:55.126561 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:00.127931 kubelet[2169]: E1002 19:48:00.127891 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:02.932055 kubelet[2169]: E1002 19:48:02.932028 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\
\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:02.996797 kubelet[2169]: E1002 19:48:02.996757 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:02.998368 kubelet[2169]: E1002 19:48:02.998348 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:02.998551 kubelet[2169]: E1002 19:48:02.998528 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:02.999680 kubelet[2169]: E1002 19:48:02.999662 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:03.001202 kubelet[2169]: E1002 19:48:03.001187 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:03.001202 kubelet[2169]: E1002 19:48:03.001203 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:05.128469 kubelet[2169]: E1002 19:48:05.128440 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:07.067453 kubelet[2169]: I1002 19:48:07.067366 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e Oct 2 19:48:07.069826 kubelet[2169]: E1002 19:48:07.069809 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:48:10.129948 kubelet[2169]: E1002 19:48:10.129919 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:13.073068 kubelet[2169]: E1002 19:48:13.073036 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:13.074597 kubelet[2169]: E1002 19:48:13.074554 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:13.076279 kubelet[2169]: E1002 19:48:13.076259 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:13.077566 kubelet[2169]: E1002 19:48:13.077551 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:13.078996 kubelet[2169]: E1002 19:48:13.078977 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:13.080198 kubelet[2169]: E1002 19:48:13.080184 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:13.080198 kubelet[2169]: E1002 
19:48:13.080197 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:15.131202 kubelet[2169]: E1002 19:48:15.131178 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:20.132083 kubelet[2169]: E1002 19:48:20.132041 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:23.109941 kubelet[2169]: E1002 19:48:23.109905 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:23.169824 kubelet[2169]: E1002 19:48:23.169778 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"s
ystemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:23.171475 kubelet[2169]: E1002 19:48:23.171453 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:23.172653 kubelet[2169]: E1002 19:48:23.172626 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:23.174067 kubelet[2169]: E1002 19:48:23.174039 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:23.175374 kubelet[2169]: E1002 19:48:23.175357 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:23.175374 kubelet[2169]: E1002 19:48:23.175369 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:25.133265 kubelet[2169]: E1002 19:48:25.133236 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:30.134109 kubelet[2169]: E1002 19:48:30.134077 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:33.238865 kubelet[2169]: E1002 19:48:33.238830 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":
[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:33.240352 kubelet[2169]: E1002 19:48:33.240327 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:33.241664 kubelet[2169]: E1002 19:48:33.241639 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:33.242890 kubelet[2169]: E1002 19:48:33.242868 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:33.244029 kubelet[2169]: E1002 19:48:33.244009 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:33.244029 kubelet[2169]: E1002 19:48:33.244021 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:33.506252 kubelet[2169]: E1002 19:48:33.506138 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:35.135403 kubelet[2169]: E1002 19:48:35.135377 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:40.135919 kubelet[2169]: E1002 19:48:40.135892 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:43.535851 kubelet[2169]: E1002 19:48:43.535826 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:43.537237 kubelet[2169]: E1002 19:48:43.537219 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:43.538359 kubelet[2169]: E1002 19:48:43.538336 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:43.539338 kubelet[2169]: E1002 19:48:43.539313 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:43.540493 kubelet[2169]: E1002 19:48:43.540480 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:43.540493 kubelet[2169]: E1002 
19:48:43.540494 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:43.769220 kubelet[2169]: E1002 19:48:43.769183 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:45.136718 kubelet[2169]: E1002 19:48:45.136690 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:50.137378 kubelet[2169]: E1002 19:48:50.137343 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:48:51.067495 kubelet[2169]: E1002 19:48:51.067457 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:48:53.741297 kubelet[2169]: E1002 19:48:53.741269 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:48:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\
\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:53.742519 kubelet[2169]: E1002 19:48:53.742499 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:53.743517 kubelet[2169]: E1002 19:48:53.743501 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:53.744550 kubelet[2169]: E1002 19:48:53.744537 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:53.745608 kubelet[2169]: E1002 19:48:53.745587 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:48:53.745608 kubelet[2169]: E1002 19:48:53.745605 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:48:53.959396 kubelet[2169]: E1002 19:48:53.959367 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:48:55.147664 kubelet[2169]: E1002 19:48:55.147563 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:00.152862 kubelet[2169]: E1002 19:49:00.152830 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:02.070916 kubelet[2169]: E1002 19:49:02.069745 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:49:04.057224 kubelet[2169]: E1002 19:49:04.055189 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:04Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:04.103969 kubelet[2169]: E1002 19:49:04.103102 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:04.112928 kubelet[2169]: E1002 19:49:04.112717 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:04.156550 kubelet[2169]: E1002 19:49:04.156509 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:04.171231 kubelet[2169]: E1002 19:49:04.171199 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:04.171456 kubelet[2169]: E1002 
19:49:04.171439 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:49:04.240550 kubelet[2169]: E1002 19:49:04.240513 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:49:05.154925 kubelet[2169]: E1002 19:49:05.154824 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:10.156106 kubelet[2169]: E1002 19:49:10.156076 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:14.378185 kubelet[2169]: E1002 19:49:14.378139 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:49:14.451820 kubelet[2169]: E1002 19:49:14.451772 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:14Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:14Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:14Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:14Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"
bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:14.454300 kubelet[2169]: E1002 19:49:14.454286 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:14.455519 kubelet[2169]: E1002 19:49:14.455482 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:14.456686 kubelet[2169]: E1002 19:49:14.456670 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:14.457922 kubelet[2169]: E1002 19:49:14.457899 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:14.457922 kubelet[2169]: E1002 19:49:14.457918 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:49:15.157466 kubelet[2169]: E1002 19:49:15.157433 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:20.161847 kubelet[2169]: E1002 19:49:20.158879 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:24.453691 kubelet[2169]: E1002 19:49:24.453653 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:49:24.760224 kubelet[2169]: E1002 19:49:24.759675 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:24.794078 kubelet[2169]: E1002 19:49:24.791618 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:24.803136 kubelet[2169]: E1002 19:49:24.803086 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:24.812851 kubelet[2169]: E1002 19:49:24.812744 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:24.823147 kubelet[2169]: E1002 19:49:24.822726 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:24.823147 kubelet[2169]: E1002 
19:49:24.822762 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:49:25.159947 kubelet[2169]: E1002 19:49:25.159862 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:30.161941 kubelet[2169]: E1002 19:49:30.161909 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:32.068803 kubelet[2169]: I1002 19:49:32.068039 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e Oct 2 19:49:32.074673 kubelet[2169]: E1002 19:49:32.074276 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:49:34.595958 kubelet[2169]: E1002 19:49:34.595871 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:49:35.164325 kubelet[2169]: E1002 19:49:35.164058 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:35.203473 kubelet[2169]: E1002 19:49:35.203427 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab7
0aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:35.247947 kubelet[2169]: E1002 19:49:35.247898 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:35.256291 kubelet[2169]: E1002 19:49:35.256216 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:35.266979 kubelet[2169]: E1002 19:49:35.266770 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:35.275814 kubelet[2169]: E1002 19:49:35.273225 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:35.275814 kubelet[2169]: E1002 19:49:35.273265 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count" Oct 2 19:49:40.165829 kubelet[2169]: E1002 19:49:40.165636 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:44.792374 kubelet[2169]: E1002 19:49:44.792255 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 2 19:49:45.170469 kubelet[2169]: E1002 19:49:45.170437 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:49:45.547413 kubelet[2169]: E1002 19:49:45.547287 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:45Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:45.564857 kubelet[2169]: E1002 19:49:45.564817 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:45.580936 kubelet[2169]: E1002 19:49:45.579668 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:45.593317 kubelet[2169]: E1002 19:49:45.592381 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:45.610927 kubelet[2169]: E1002 19:49:45.610321 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found" Oct 2 19:49:45.610927 kubelet[2169]: E1002 
19:49:45.610358 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Oct 2 19:49:50.171986 kubelet[2169]: E1002 19:49:50.171556 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:49:54.970509 kubelet[2169]: E1002 19:49:54.970428 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 2 19:49:55.173020 kubelet[2169]: E1002 19:49:55.172989 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:49:55.890430 kubelet[2169]: E1002 19:49:55.890324 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:49:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:49:55.919413 kubelet[2169]: E1002 19:49:55.919216 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:49:55.932975 kubelet[2169]: E1002 19:49:55.929241 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:49:55.949052 kubelet[2169]: E1002 19:49:55.945264 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:49:55.961402 kubelet[2169]: E1002 19:49:55.961276 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:49:55.961402 kubelet[2169]: E1002 19:49:55.961304 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Oct 2 19:50:00.183083 kubelet[2169]: E1002 19:50:00.179073 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:05.189301 kubelet[2169]: E1002 19:50:05.189240 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:05.287174 kubelet[2169]: E1002 19:50:05.287136 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 2 19:50:06.074826 kubelet[2169]: E1002 19:50:06.074642 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:50:06.194145 kubelet[2169]: E1002 19:50:06.194087 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:50:06.220293 kubelet[2169]: E1002 19:50:06.220217 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:50:06.228731 kubelet[2169]: E1002 19:50:06.227355 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:50:06.233966 kubelet[2169]: E1002 19:50:06.233762 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:50:06.242391 kubelet[2169]: E1002 19:50:06.242137 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"localhost\": nodes \"localhost\" not found"
Oct 2 19:50:06.242391 kubelet[2169]: E1002 19:50:06.242163 2169 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Oct 2 19:50:09.180551 kubelet[2169]: E1002 19:50:09.180472 2169 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:50:10.190383 kubelet[2169]: E1002 19:50:10.190336 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:15.191818 kubelet[2169]: E1002 19:50:15.191739 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:24.158053 kubelet[2169]: E1002 19:50:20.192668 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:24.158053 kubelet[2169]: I1002 19:50:23.158797 2169 kubelet.go:1694] "Trying to delete pod" pod="kube-system/kube-apiserver-localhost" podUID=c32568ce-3b6f-4ce7-bd33-13e99f8ca93e
Oct 2 19:50:25.193772 kubelet[2169]: E1002 19:50:25.193733 2169 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:50:25.510427 kubelet[2169]: E1002 19:50:25.510039 2169 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:50:26.429099 kubelet[2169]: E1002 19:50:26.429017 2169 kubelet_node_status.go:460] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"addresses\\\":[{\\\"address\\\":\\\"10.0.0.9\\\",\\\"type\\\":\\\"InternalIP\\\"},{\\\"address\\\":\\\"localhost\\\",\\\"type\\\":\\\"Hostname\\\"},{\\\"$patch\\\":\\\"replace\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2023-10-02T19:50:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c\\\",\\\"registry.k8s.io/etcd:3.5.6-0\\\"],\\\"sizeBytes\\\":102542580},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:bb3eff7d20d94d44ffea47066a4400a70ede58abd9de01c80400817a955397b4\\\",\\\"registry.k8s.io/kube-apiserver:v1.25.14\\\"],\\\"sizeBytes\\\":35071759},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:44d63bed8020f7610851a1a653ad7c6df83bd02ad128303de939a39997854ace\\\",\\\"registry.k8s.io/kube-controller-manager:v1.25.14\\\"],\\\"sizeBytes\\\":31934192},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326\\\",\\\"registry.k8s.io/kube-proxy:v1.25.14\\\"],\\\"sizeBytes\\\":20490264},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:207b36120eca76bf8607682a3ee37e12b5156b921a9379d776b297ab01ca8198\\\",\\\"registry.k8s.io/kube-scheduler:v1.25.14\\\"],\\\"sizeBytes\\\":16247360},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a\\\",\\\"registry.k8s.io/coredns/coredns:v1.9.3\\\"],\\\"sizeBytes\\\":14837849},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b03dde40-2dfa-4722-a5a3-8a83b4c6afbc\\\",\\\"kubeProxyVersion\\\":\\\"v1.25.10\\\",\\\"kubeletVersion\\\":\\\"v1.25.10\\\",\\\"machineID\\\":\\\"2806a812693d422ea44d06fe612e82fc\\\",\\\"systemUUID\\\":\\\"2806a812-693d-422e-a44d-06fe612e82fc\\\"}}}\" for node \"localhost\": Patch \"https://10.0.0.9:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded"