Feb 12 19:40:48.015731 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 12 19:40:48.015768 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:40:48.015789 kernel: BIOS-provided physical RAM map: Feb 12 19:40:48.015803 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 19:40:48.015816 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 19:40:48.015830 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 19:40:48.015847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Feb 12 19:40:48.015861 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Feb 12 19:40:48.015878 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 19:40:48.015892 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 19:40:48.015907 kernel: NX (Execute Disable) protection: active Feb 12 19:40:48.015921 kernel: SMBIOS 2.8 present. Feb 12 19:40:48.015935 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 12 19:40:48.015950 kernel: Hypervisor detected: KVM Feb 12 19:40:48.015969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 19:40:48.015988 kernel: kvm-clock: cpu 0, msr 57faa001, primary cpu clock Feb 12 19:40:48.016004 kernel: kvm-clock: using sched offset of 4212976495 cycles Feb 12 19:40:48.016021 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 19:40:48.016037 kernel: tsc: Detected 2294.608 MHz processor Feb 12 19:40:48.016053 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 19:40:48.016070 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 19:40:48.016085 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Feb 12 19:40:48.016101 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 19:40:48.016120 kernel: ACPI: Early table checksum verification disabled Feb 12 19:40:48.016136 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Feb 12 19:40:48.016152 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016168 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016184 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016200 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 12 19:40:48.016216 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016231 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016247 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016266 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:40:48.016282 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 12 19:40:48.016298 kernel: ACPI: Reserving DSDT table memory 
at [mem 0x7ffe0040-0x7ffe1769] Feb 12 19:40:48.016314 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 12 19:40:48.016334 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 12 19:40:48.016354 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 12 19:40:48.016370 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 12 19:40:48.016387 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 12 19:40:48.016413 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 12 19:40:48.018593 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 12 19:40:48.018616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 12 19:40:48.018627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 12 19:40:48.018638 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Feb 12 19:40:48.018649 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Feb 12 19:40:48.018666 kernel: Zone ranges: Feb 12 19:40:48.018676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 19:40:48.018684 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Feb 12 19:40:48.018693 kernel: Normal empty Feb 12 19:40:48.018702 kernel: Movable zone start for each node Feb 12 19:40:48.018710 kernel: Early memory node ranges Feb 12 19:40:48.018719 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 19:40:48.018727 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Feb 12 19:40:48.018736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Feb 12 19:40:48.018747 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:40:48.018756 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 19:40:48.018765 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Feb 12 19:40:48.018774 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 19:40:48.018782 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 19:40:48.018791 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 19:40:48.018799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 19:40:48.018808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 19:40:48.018816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 19:40:48.018827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 19:40:48.018836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 19:40:48.018844 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 19:40:48.018853 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 19:40:48.018861 kernel: TSC deadline timer available Feb 12 19:40:48.018870 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 12 19:40:48.018879 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 12 19:40:48.018887 kernel: Booting paravirtualized kernel on KVM Feb 12 19:40:48.018896 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 19:40:48.018907 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 12 19:40:48.018915 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 12 19:40:48.018924 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 12 19:40:48.018932 kernel: pcpu-alloc: [0] 0 1 Feb 12 19:40:48.018940 kernel: 
kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 12 19:40:48.018948 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 12 19:40:48.018957 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Feb 12 19:40:48.018965 kernel: Policy zone: DMA32 Feb 12 19:40:48.018975 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:40:48.018987 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:40:48.018995 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:40:48.019004 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 19:40:48.019012 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:40:48.019021 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 12 19:40:48.019030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 12 19:40:48.019038 kernel: Kernel/User page tables isolation: enabled Feb 12 19:40:48.019046 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 19:40:48.019064 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 19:40:48.019086 kernel: rcu: Hierarchical RCU implementation. Feb 12 19:40:48.019108 kernel: rcu: RCU event tracing is enabled. Feb 12 19:40:48.019130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 12 19:40:48.019151 kernel: Rude variant of Tasks RCU enabled. Feb 12 19:40:48.019172 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:40:48.019193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 19:40:48.019202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 12 19:40:48.019210 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 12 19:40:48.019223 kernel: random: crng init done Feb 12 19:40:48.019231 kernel: Console: colour VGA+ 80x25 Feb 12 19:40:48.019240 kernel: printk: console [tty0] enabled Feb 12 19:40:48.019248 kernel: printk: console [ttyS0] enabled Feb 12 19:40:48.019257 kernel: ACPI: Core revision 20210730 Feb 12 19:40:48.019284 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 19:40:48.019297 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 19:40:48.019311 kernel: x2apic enabled Feb 12 19:40:48.019323 kernel: Switched APIC routing to physical x2apic. Feb 12 19:40:48.019339 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 19:40:48.019350 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Feb 12 19:40:48.019367 kernel: Calibrating delay loop (skipped) preset value.. 
4589.21 BogoMIPS (lpj=2294608) Feb 12 19:40:48.019386 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 12 19:40:48.019403 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 12 19:40:48.019420 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 19:40:48.019452 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 19:40:48.019469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 19:40:48.019487 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 19:40:48.019508 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 12 19:40:48.019536 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 19:40:48.019554 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 19:40:48.019575 kernel: MDS: Mitigation: Clear CPU buffers Feb 12 19:40:48.019593 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 19:40:48.019611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 19:40:48.019629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 19:40:48.019647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 19:40:48.019665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 19:40:48.019684 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 19:40:48.019705 kernel: Freeing SMP alternatives memory: 32K Feb 12 19:40:48.019723 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:40:48.019741 kernel: LSM: Security Framework initializing Feb 12 19:40:48.019759 kernel: SELinux: Initializing. Feb 12 19:40:48.019777 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 19:40:48.019795 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 19:40:48.019816 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2) Feb 12 19:40:48.019834 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 12 19:40:48.019852 kernel: signal: max sigframe size: 1776 Feb 12 19:40:48.019870 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:40:48.019888 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 19:40:48.019906 kernel: smp: Bringing up secondary CPUs ... Feb 12 19:40:48.019924 kernel: x86: Booting SMP configuration: Feb 12 19:40:48.019942 kernel: .... 
node #0, CPUs: #1 Feb 12 19:40:48.019960 kernel: kvm-clock: cpu 1, msr 57faa041, secondary cpu clock Feb 12 19:40:48.019978 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 12 19:40:48.019999 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 19:40:48.020017 kernel: smpboot: Max logical packages: 1 Feb 12 19:40:48.020035 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Feb 12 19:40:48.020054 kernel: devtmpfs: initialized Feb 12 19:40:48.020072 kernel: x86/mm: Memory block size: 128MB Feb 12 19:40:48.020090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:40:48.020108 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 12 19:40:48.020126 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:40:48.020144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:40:48.020165 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:40:48.020184 kernel: audit: type=2000 audit(1707766845.954:1): state=initialized audit_enabled=0 res=1 Feb 12 19:40:48.020201 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:40:48.020219 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 19:40:48.020237 kernel: cpuidle: using governor menu Feb 12 19:40:48.020256 kernel: ACPI: bus type PCI registered Feb 12 19:40:48.020274 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:40:48.020292 kernel: dca service started, version 1.12.1 Feb 12 19:40:48.020309 kernel: PCI: Using configuration type 1 for base access Feb 12 19:40:48.020331 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 12 19:40:48.020349 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:40:48.020367 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:40:48.020385 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:40:48.020403 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:40:48.020421 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:40:48.020449 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:40:48.020467 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:40:48.020485 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:40:48.020507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:40:48.020535 kernel: ACPI: Interpreter enabled Feb 12 19:40:48.020551 kernel: ACPI: PM: (supports S0 S5) Feb 12 19:40:48.020564 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 19:40:48.020577 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 19:40:48.020590 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 19:40:48.020606 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:40:48.020879 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:40:48.021026 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 12 19:40:48.021050 kernel: acpiphp: Slot [3] registered Feb 12 19:40:48.021068 kernel: acpiphp: Slot [4] registered Feb 12 19:40:48.021086 kernel: acpiphp: Slot [5] registered Feb 12 19:40:48.021104 kernel: acpiphp: Slot [6] registered Feb 12 19:40:48.021122 kernel: acpiphp: Slot [7] registered Feb 12 19:40:48.021140 kernel: acpiphp: Slot [8] registered Feb 12 19:40:48.021158 kernel: acpiphp: Slot [9] registered Feb 12 19:40:48.021180 kernel: acpiphp: Slot [10] registered Feb 12 19:40:48.021198 kernel: acpiphp: Slot [11] registered Feb 12 19:40:48.021216 kernel: acpiphp: Slot [12] registered Feb 12 19:40:48.021235 kernel: acpiphp: Slot [13] registered Feb 12 19:40:48.021253 kernel: acpiphp: Slot [14] registered Feb 12 19:40:48.021271 kernel: acpiphp: Slot [15] registered Feb 12 19:40:48.021289 kernel: acpiphp: Slot [16] registered Feb 12 19:40:48.021307 kernel: acpiphp: Slot [17] registered Feb 12 19:40:48.021325 kernel: acpiphp: Slot [18] registered Feb 12 19:40:48.021343 kernel: acpiphp: Slot [19] registered Feb 12 19:40:48.021364 kernel: acpiphp: Slot [20] registered Feb 12 19:40:48.021382 kernel: acpiphp: Slot [21] registered Feb 12 19:40:48.021400 kernel: acpiphp: Slot [22] registered Feb 12 19:40:48.021418 kernel: acpiphp: Slot [23] registered Feb 12 19:40:48.026504 kernel: acpiphp: Slot [24] registered Feb 12 19:40:48.026526 kernel: acpiphp: Slot [25] registered Feb 12 19:40:48.026541 kernel: acpiphp: Slot [26] registered Feb 12 19:40:48.026561 kernel: acpiphp: Slot [27] registered Feb 12 19:40:48.026571 kernel: acpiphp: Slot [28] registered Feb 12 19:40:48.026588 kernel: acpiphp: Slot [29] registered Feb 12 19:40:48.026596 kernel: acpiphp: Slot [30] registered Feb 12 19:40:48.026605 kernel: acpiphp: Slot [31] registered Feb 12 19:40:48.026614 kernel: PCI host bridge to bus 0000:00 Feb 12 19:40:48.026846 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 19:40:48.026953 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 19:40:48.027041 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 19:40:48.027128 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 12 19:40:48.027219 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 19:40:48.027303 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:40:48.027487 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 19:40:48.027619 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 19:40:48.027805 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 19:40:48.027979 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 12 19:40:48.028126 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 19:40:48.028283 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 19:40:48.028451 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 19:40:48.028597 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 19:40:48.028760 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 12 19:40:48.028909 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 12 19:40:48.029061 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 19:40:48.029210 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 19:40:48.029352 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 19:40:48.029590 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 12 19:40:48.029744 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 12 19:40:48.029891 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 12 19:40:48.030034 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 12 19:40:48.030195 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 12 19:40:48.030343 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 19:40:48.030616 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:40:48.030801 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 12 19:40:48.030989 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 12 19:40:48.031133 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 12 19:40:48.031309 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:40:48.038562 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 12 19:40:48.038762 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 12 19:40:48.038927 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 12 19:40:48.039121 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 12 19:40:48.039282 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 12 19:40:48.039472 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 12 19:40:48.039629 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 12 19:40:48.039808 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:40:48.039948 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 19:40:48.040083 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 12 19:40:48.040230 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 12 19:40:48.040405 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:40:48.040582 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 12 19:40:48.040729 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 12 19:40:48.040902 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 12 19:40:48.041058 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 12 19:40:48.041206 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 12 19:40:48.041352 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 12 19:40:48.041375 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 19:40:48.041395 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 19:40:48.041414 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 19:40:48.041461 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 19:40:48.041480 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 19:40:48.041499 kernel: iommu: Default domain type: Translated Feb 12 19:40:48.041517 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 19:40:48.041696 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 19:40:48.041878 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 19:40:48.042035 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 19:40:48.042059 kernel: vgaarb: loaded Feb 12 19:40:48.042084 kernel: pps_core: LinuxPPS API ver. 
1 registered Feb 12 19:40:48.042103 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:40:48.042122 kernel: PTP clock support registered Feb 12 19:40:48.042140 kernel: PCI: Using ACPI for IRQ routing Feb 12 19:40:48.042159 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 19:40:48.042177 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 19:40:48.042195 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Feb 12 19:40:48.042213 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 19:40:48.042232 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 19:40:48.042261 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 19:40:48.042284 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:40:48.042305 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:40:48.042323 kernel: pnp: PnP ACPI init Feb 12 19:40:48.042342 kernel: pnp: PnP ACPI: found 4 devices Feb 12 19:40:48.042361 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 19:40:48.042402 kernel: NET: Registered PF_INET protocol family Feb 12 19:40:48.042425 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:40:48.042478 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 12 19:40:48.042501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:40:48.042519 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 19:40:48.042538 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 12 19:40:48.042558 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 12 19:40:48.042578 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 19:40:48.042597 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 19:40:48.042616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:40:48.042634 kernel: NET: Registered PF_XDP protocol family Feb 12 19:40:48.042798 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 19:40:48.042927 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 19:40:48.043081 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 19:40:48.043207 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 12 19:40:48.043343 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 19:40:48.043533 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 19:40:48.043679 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 19:40:48.043819 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 19:40:48.043847 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 12 19:40:48.043984 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 45929 usecs Feb 12 19:40:48.044007 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:40:48.044026 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 12 19:40:48.044045 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Feb 12 19:40:48.044064 kernel: Initialise system trusted keyrings Feb 12 19:40:48.044082 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 12 19:40:48.044108 kernel: Key type asymmetric registered Feb 12 19:40:48.044122 kernel: Asymmetric key parser 
'x509' registered Feb 12 19:40:48.044140 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:40:48.044152 kernel: io scheduler mq-deadline registered Feb 12 19:40:48.044166 kernel: io scheduler kyber registered Feb 12 19:40:48.044187 kernel: io scheduler bfq registered Feb 12 19:40:48.044205 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 19:40:48.044224 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 12 19:40:48.044242 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 19:40:48.044260 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 19:40:48.044279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:40:48.044298 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 19:40:48.044327 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 19:40:48.044342 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 19:40:48.044355 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 19:40:48.044568 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 12 19:40:48.044598 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 19:40:48.044727 kernel: rtc_cmos 00:03: registered as rtc0 Feb 12 19:40:48.044859 kernel: rtc_cmos 00:03: setting system clock to 2024-02-12T19:40:47 UTC (1707766847) Feb 12 19:40:48.045024 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 12 19:40:48.045058 kernel: intel_pstate: CPU model not supported Feb 12 19:40:48.045075 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:40:48.045089 kernel: Segment Routing with IPv6 Feb 12 19:40:48.045106 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:40:48.045129 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:40:48.045155 kernel: Key type dns_resolver registered Feb 12 19:40:48.045174 kernel: IPI shorthand broadcast: enabled Feb 12 19:40:48.045194 kernel: sched_clock: Marking stable (907235407, 161083719)->(1304722470, -236403344) Feb 12 19:40:48.045224 kernel: registered taskstats version 1 Feb 12 19:40:48.045244 kernel: Loading compiled-in X.509 certificates Feb 12 19:40:48.045265 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 12 19:40:48.045283 kernel: Key type .fscrypt registered Feb 12 19:40:48.045301 kernel: Key type fscrypt-provisioning registered Feb 12 19:40:48.045319 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 12 19:40:48.045337 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:40:48.045361 kernel: ima: No architecture policies found Feb 12 19:40:48.045384 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:40:48.045411 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:40:48.053525 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:40:48.053566 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:40:48.053588 kernel: Run /init as init process Feb 12 19:40:48.053610 kernel: with arguments: Feb 12 19:40:48.053632 kernel: /init Feb 12 19:40:48.053685 kernel: with environment: Feb 12 19:40:48.053711 kernel: HOME=/ Feb 12 19:40:48.053730 kernel: TERM=linux Feb 12 19:40:48.053756 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:40:48.053785 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:40:48.053834 systemd[1]: Detected virtualization kvm. Feb 12 19:40:48.053854 systemd[1]: Detected architecture x86-64. Feb 12 19:40:48.053867 systemd[1]: Running in initrd. Feb 12 19:40:48.053880 systemd[1]: No hostname configured, using default hostname. Feb 12 19:40:48.053893 systemd[1]: Hostname set to <localhost>. Feb 12 19:40:48.053912 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:40:48.053926 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:40:48.053940 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:40:48.053954 systemd[1]: Reached target cryptsetup.target. Feb 12 19:40:48.053967 systemd[1]: Reached target paths.target. Feb 12 19:40:48.053982 systemd[1]: Reached target slices.target. Feb 12 19:40:48.053996 systemd[1]: Reached target swap.target. Feb 12 19:40:48.054017 systemd[1]: Reached target timers.target. Feb 12 19:40:48.054042 systemd[1]: Listening on iscsid.socket. Feb 12 19:40:48.054062 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:40:48.054081 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:40:48.054100 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:40:48.054120 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:40:48.054140 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:40:48.054159 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:40:48.054179 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:40:48.054202 systemd[1]: Reached target sockets.target. Feb 12 19:40:48.054221 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:40:48.054241 systemd[1]: Finished network-cleanup.service. Feb 12 19:40:48.054264 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:40:48.054284 systemd[1]: Starting systemd-journald.service... Feb 12 19:40:48.054303 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:40:48.054326 systemd[1]: Starting systemd-resolved.service... Feb 12 19:40:48.054345 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:40:48.054455 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:40:48.054479 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 12 19:40:48.054505 systemd-journald[184]: Journal started Feb 12 19:40:48.054629 systemd-journald[184]: Runtime Journal (/run/log/journal/c321619e97aa428d86125bb8d4173fb9) is 4.9M, max 39.5M, 34.5M free. Feb 12 19:40:48.016963 systemd-modules-load[185]: Inserted module 'overlay' Feb 12 19:40:48.123170 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:40:48.123207 kernel: Bridge firewalling registered Feb 12 19:40:48.123230 kernel: SCSI subsystem initialized Feb 12 19:40:48.123251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:40:48.123285 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:40:48.123308 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:40:48.062726 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 12 19:40:48.130767 systemd[1]: Started systemd-journald.service. Feb 12 19:40:48.130812 kernel: audit: type=1130 audit(1707766848.123:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.081525 systemd-resolved[186]: Positive Trust Anchors: Feb 12 19:40:48.081542 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:40:48.139280 kernel: audit: type=1130 audit(1707766848.132:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.081620 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:40:48.148061 kernel: audit: type=1130 audit(1707766848.139:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.093905 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 12 19:40:48.154921 kernel: audit: type=1130 audit(1707766848.148:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:48.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.114767 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 12 19:40:48.162292 kernel: audit: type=1130 audit(1707766848.155:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.133048 systemd[1]: Started systemd-resolved.service. Feb 12 19:40:48.140195 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:40:48.148983 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:40:48.155977 systemd[1]: Reached target nss-lookup.target. Feb 12 19:40:48.164299 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:40:48.166647 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:40:48.171124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:40:48.187742 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:40:48.195344 kernel: audit: type=1130 audit(1707766848.188:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.188802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:40:48.218656 kernel: audit: type=1130 audit(1707766848.195:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.220932 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:40:48.226646 kernel: audit: type=1130 audit(1707766848.221:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.223052 systemd[1]: Starting dracut-cmdline.service... 
Feb 12 19:40:48.241172 dracut-cmdline[206]: dracut-dracut-053 Feb 12 19:40:48.244867 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:40:48.344467 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:40:48.360469 kernel: iscsi: registered transport (tcp) Feb 12 19:40:48.388728 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:40:48.388820 kernel: QLogic iSCSI HBA Driver Feb 12 19:40:48.445955 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:40:48.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.448276 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:40:48.453831 kernel: audit: type=1130 audit(1707766848.446:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.515507 kernel: raid6: avx2x4 gen() 16817 MB/s Feb 12 19:40:48.533505 kernel: raid6: avx2x4 xor() 7249 MB/s Feb 12 19:40:48.551502 kernel: raid6: avx2x2 gen() 15691 MB/s Feb 12 19:40:48.569708 kernel: raid6: avx2x2 xor() 12352 MB/s Feb 12 19:40:48.587533 kernel: raid6: avx2x1 gen() 12079 MB/s Feb 12 19:40:48.605558 kernel: raid6: avx2x1 xor() 15996 MB/s Feb 12 19:40:48.623504 kernel: raid6: sse2x4 gen() 11512 MB/s Feb 12 19:40:48.641507 kernel: raid6: sse2x4 xor() 6154 MB/s Feb 12 19:40:48.659523 kernel: raid6: sse2x2 gen() 11573 MB/s Feb 12 19:40:48.677518 kernel: raid6: sse2x2 xor() 6353 MB/s Feb 12 19:40:48.695499 kernel: raid6: sse2x1 gen() 9751 MB/s Feb 12 19:40:48.714077 kernel: raid6: sse2x1 xor() 5488 MB/s Feb 12 19:40:48.714168 kernel: raid6: using algorithm avx2x4 gen() 16817 MB/s Feb 12 19:40:48.714194 kernel: raid6: .... xor() 7249 MB/s, rmw enabled Feb 12 19:40:48.715381 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:40:48.732474 kernel: xor: automatically using best checksumming function avx Feb 12 19:40:48.876481 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:40:48.891556 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:40:48.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.892000 audit: BPF prog-id=7 op=LOAD Feb 12 19:40:48.892000 audit: BPF prog-id=8 op=LOAD Feb 12 19:40:48.893637 systemd[1]: Starting systemd-udevd.service... Feb 12 19:40:48.913280 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 12 19:40:48.921699 systemd[1]: Started systemd-udevd.service. Feb 12 19:40:48.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:48.928935 systemd[1]: Starting dracut-pre-trigger.service... 
Feb 12 19:40:48.956731 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Feb 12 19:40:49.006707 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:40:49.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.008731 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:40:49.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:49.070417 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:40:49.155458 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 12 19:40:49.175926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:40:49.176002 kernel: GPT:9289727 != 125829119 Feb 12 19:40:49.176026 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:40:49.176049 kernel: GPT:9289727 != 125829119 Feb 12 19:40:49.176071 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:40:49.176094 kernel: scsi host0: Virtio SCSI HBA Feb 12 19:40:49.176154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:49.187454 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:40:49.219465 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:40:49.219540 kernel: AES CTR mode by8 optimization enabled Feb 12 19:40:49.222461 kernel: virtio_blk virtio5: [vdb] 952 512-byte logical blocks (487 kB/476 KiB) Feb 12 19:40:49.260473 kernel: ACPI: bus type USB registered Feb 12 19:40:49.263471 kernel: usbcore: registered new interface driver usbfs Feb 12 19:40:49.263552 kernel: usbcore: registered new interface driver hub Feb 12 19:40:49.263584 kernel: usbcore: registered new device driver usb Feb 12 19:40:49.277461 kernel: libata version 3.00 loaded. Feb 12 19:40:49.278960 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:40:49.389203 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:40:49.389469 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Feb 12 19:40:49.389494 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (436) Feb 12 19:40:49.389524 kernel: scsi host1: ata_piix Feb 12 19:40:49.389730 kernel: ehci-pci: EHCI PCI platform driver Feb 12 19:40:49.389752 kernel: scsi host2: ata_piix Feb 12 19:40:49.389942 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 12 19:40:49.389965 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 12 19:40:49.389986 kernel: uhci_hcd: USB Universal Host Controller Interface driver Feb 12 19:40:49.390008 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 12 19:40:49.390161 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 12 19:40:49.390335 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 12 19:40:49.390514 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Feb 12 19:40:49.390662 kernel: hub 1-0:1.0: USB hub found Feb 12 19:40:49.390888 kernel: hub 1-0:1.0: 2 ports detected Feb 12 19:40:49.398805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:40:49.408975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Feb 12 19:40:49.409777 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:40:49.419242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:40:49.421711 systemd[1]: Starting disk-uuid.service... Feb 12 19:40:49.434607 disk-uuid[504]: Primary Header is updated. Feb 12 19:40:49.434607 disk-uuid[504]: Secondary Entries is updated. Feb 12 19:40:49.434607 disk-uuid[504]: Secondary Header is updated. Feb 12 19:40:49.439775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:49.474464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:50.466496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:50.466954 disk-uuid[505]: The operation has completed successfully. Feb 12 19:40:50.514982 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:40:50.515151 systemd[1]: Finished disk-uuid.service. Feb 12 19:40:50.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.526857 systemd[1]: Starting verity-setup.service... Feb 12 19:40:50.550511 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:40:50.621692 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:40:50.623185 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:40:50.626735 systemd[1]: Finished verity-setup.service. Feb 12 19:40:50.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.720484 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:40:50.721483 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:40:50.722386 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:40:50.723471 systemd[1]: Starting ignition-setup.service... Feb 12 19:40:50.725579 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:40:50.743965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:50.744050 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:40:50.744072 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:40:50.772841 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:40:50.788142 systemd[1]: Finished ignition-setup.service. Feb 12 19:40:50.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.790223 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:40:50.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.891000 audit: BPF prog-id=9 op=LOAD Feb 12 19:40:50.890481 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:40:50.893149 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:40:50.948033 systemd-networkd[688]: lo: Link UP Feb 12 19:40:50.948055 systemd-networkd[688]: lo: Gained carrier Feb 12 19:40:50.949781 systemd-networkd[688]: Enumeration completed Feb 12 19:40:50.950932 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:40:50.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.951835 systemd[1]: Started systemd-networkd.service. Feb 12 19:40:50.953417 systemd[1]: Reached target network.target. Feb 12 19:40:50.954683 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 12 19:40:50.955468 systemd[1]: Starting iscsiuio.service... Feb 12 19:40:50.960230 systemd-networkd[688]: eth1: Link UP Feb 12 19:40:50.960238 systemd-networkd[688]: eth1: Gained carrier Feb 12 19:40:50.972160 systemd-networkd[688]: eth0: Link UP Feb 12 19:40:50.972170 systemd-networkd[688]: eth0: Gained carrier Feb 12 19:40:50.980143 systemd[1]: Started iscsiuio.service. Feb 12 19:40:50.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.982337 systemd[1]: Starting iscsid.service... Feb 12 19:40:50.988946 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:50.988946 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:40:50.988946 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:40:50.988946 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:40:50.988946 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:40:50.988946 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:50.988946 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:40:50.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:50.990556 systemd[1]: Started iscsid.service. Feb 12 19:40:50.990820 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253 Feb 12 19:40:50.996120 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:40:51.009200 systemd-networkd[688]: eth0: DHCPv4 address 143.198.151.132/20, gateway 143.198.144.1 acquired from 169.254.169.253 Feb 12 19:40:51.033743 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:40:51.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.036013 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:40:51.037818 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:40:51.038387 systemd[1]: Reached target remote-fs.target. 
Feb 12 19:40:51.040937 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:40:51.062401 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:40:51.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.080589 ignition[618]: Ignition 2.14.0 Feb 12 19:40:51.080604 ignition[618]: Stage: fetch-offline Feb 12 19:40:51.080680 ignition[618]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.080712 ignition[618]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.087977 ignition[618]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.088217 ignition[618]: parsed url from cmdline: "" Feb 12 19:40:51.088223 ignition[618]: no config URL provided Feb 12 19:40:51.088232 ignition[618]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:51.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.090104 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:40:51.088245 ignition[618]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:51.093145 systemd[1]: Starting ignition-fetch.service... Feb 12 19:40:51.088252 ignition[618]: failed to fetch config: resource requires networking Feb 12 19:40:51.088973 ignition[618]: Ignition finished successfully Feb 12 19:40:51.111281 ignition[707]: Ignition 2.14.0 Feb 12 19:40:51.111295 ignition[707]: Stage: fetch Feb 12 19:40:51.111568 ignition[707]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.111592 ignition[707]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.114676 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.114824 ignition[707]: parsed url from cmdline: "" Feb 12 19:40:51.114829 ignition[707]: no config URL provided Feb 12 19:40:51.114836 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:51.114847 ignition[707]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:51.114893 ignition[707]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 12 19:40:51.140803 ignition[707]: GET result: OK Feb 12 19:40:51.141007 ignition[707]: parsing config with SHA512: 97814a1a254e6334e689063533ed18626f32aacb4288db17e797cbe427ef251a342ef855e346d6a4ff5d759dfd3cc4ec07ea9350d6aa8aecc6a0ea63ef5f26d5 Feb 12 19:40:51.197855 unknown[707]: fetched base config from "system" Feb 12 19:40:51.199170 unknown[707]: fetched base config from "system" Feb 12 19:40:51.200361 unknown[707]: fetched user config from "digitalocean" Feb 12 19:40:51.202786 ignition[707]: fetch: fetch complete Feb 12 19:40:51.203731 ignition[707]: fetch: fetch passed Feb 12 19:40:51.204704 ignition[707]: Ignition finished successfully Feb 12 19:40:51.207810 systemd[1]: Finished ignition-fetch.service. Feb 12 19:40:51.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:51.211017 systemd[1]: Starting ignition-kargs.service... Feb 12 19:40:51.235718 ignition[713]: Ignition 2.14.0 Feb 12 19:40:51.235738 ignition[713]: Stage: kargs Feb 12 19:40:51.236035 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.236081 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.240129 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.244581 ignition[713]: kargs: kargs passed Feb 12 19:40:51.244726 ignition[713]: Ignition finished successfully Feb 12 19:40:51.247066 systemd[1]: Finished ignition-kargs.service. Feb 12 19:40:51.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.250754 systemd[1]: Starting ignition-disks.service... Feb 12 19:40:51.276455 ignition[719]: Ignition 2.14.0 Feb 12 19:40:51.276474 ignition[719]: Stage: disks Feb 12 19:40:51.276780 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.276812 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.280671 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.284284 ignition[719]: disks: disks passed Feb 12 19:40:51.286533 systemd[1]: Finished ignition-disks.service. Feb 12 19:40:51.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.284415 ignition[719]: Ignition finished successfully Feb 12 19:40:51.289091 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:40:51.291591 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:40:51.292214 systemd[1]: Reached target local-fs.target. Feb 12 19:40:51.292857 systemd[1]: Reached target sysinit.target. Feb 12 19:40:51.295230 systemd[1]: Reached target basic.target. Feb 12 19:40:51.299096 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:40:51.335977 systemd-fsck[727]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:40:51.345205 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:40:51.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.347906 systemd[1]: Mounting sysroot.mount... Feb 12 19:40:51.363480 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:40:51.365162 systemd[1]: Mounted sysroot.mount. Feb 12 19:40:51.366023 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:40:51.370311 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:40:51.373172 systemd[1]: Starting flatcar-digitalocean-network.service... Feb 12 19:40:51.377117 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:40:51.381467 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 12 19:40:51.383817 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:40:51.388350 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:40:51.393056 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:40:51.406022 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:40:51.434401 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:40:51.448167 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:40:51.460417 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:40:51.602224 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:40:51.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.605156 systemd[1]: Starting ignition-mount.service... Feb 12 19:40:51.607777 systemd[1]: Starting sysroot-boot.service... Feb 12 19:40:51.635094 coreos-metadata[734]: Feb 12 19:40:51.634 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:40:51.644479 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:40:51.648249 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:40:51.654519 coreos-metadata[734]: Feb 12 19:40:51.651 INFO Fetch successful Feb 12 19:40:51.677680 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (786) Feb 12 19:40:51.678697 coreos-metadata[734]: Feb 12 19:40:51.678 INFO wrote hostname ci-3510.3.2-d-fc9a4b050f to /sysroot/etc/hostname Feb 12 19:40:51.683616 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:40:51.690921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:51.690962 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:40:51.690986 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:40:51.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.694905 ignition[787]: INFO : Ignition 2.14.0 Feb 12 19:40:51.696331 ignition[787]: INFO : Stage: mount Feb 12 19:40:51.697775 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.699339 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.704756 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.709466 ignition[787]: INFO : mount: mount passed Feb 12 19:40:51.710691 ignition[787]: INFO : Ignition finished successfully Feb 12 19:40:51.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.713320 systemd[1]: Finished ignition-mount.service. Feb 12 19:40:51.720089 coreos-metadata[733]: Feb 12 19:40:51.719 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:40:51.727415 systemd[1]: Finished sysroot-boot.service. Feb 12 19:40:51.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:40:51.736368 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:40:51.739945 systemd[1]: Starting ignition-files.service... Feb 12 19:40:51.760683 coreos-metadata[733]: Feb 12 19:40:51.737 INFO Fetch successful Feb 12 19:40:51.776365 kernel: kauditd_printk_skb: 26 callbacks suppressed Feb 12 19:40:51.776473 kernel: audit: type=1130 audit(1707766851.761:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.776562 kernel: audit: type=1131 audit(1707766851.761:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:51.760335 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 12 19:40:51.760650 systemd[1]: Finished flatcar-digitalocean-network.service. Feb 12 19:40:51.794261 ignition[813]: INFO : Ignition 2.14.0 Feb 12 19:40:51.794261 ignition[813]: INFO : Stage: files Feb 12 19:40:51.796889 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:51.796889 ignition[813]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:51.799993 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:51.801220 ignition[813]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:40:51.802385 ignition[813]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:40:51.802385 ignition[813]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:40:51.811857 ignition[813]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:40:51.813578 ignition[813]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:40:51.814952 ignition[813]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:40:51.814952 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:40:51.813686 unknown[813]: wrote ssh authorized keys file for user: core Feb 12 19:40:51.823460 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:40:51.823460 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:40:51.823460 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 19:40:52.136868 systemd-networkd[688]: eth0: Gained IPv6LL Feb 12 
19:40:52.320916 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:40:52.565268 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 19:40:52.565268 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:40:52.569630 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:40:52.569630 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:40:52.905298 systemd-networkd[688]: eth1: Gained IPv6LL Feb 12 19:40:52.971361 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:40:53.176123 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 19:40:53.176123 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:40:53.179741 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:40:53.179741 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:40:53.245326 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:40:53.672624 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 19:40:53.672624 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:40:53.676447 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:40:53.676447 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:40:53.721249 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:40:54.626192 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 19:40:54.626192 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:40:54.630536 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:40:54.630536 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:40:54.630536 ignition[813]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:40:54.630536 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:40:54.630536 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:40:54.630536 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(e): op(f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 12 19:40:54.630536 ignition[813]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:40:54.675210 kernel: audit: type=1130 audit(1707766854.654:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.651573 systemd[1]: Finished ignition-files.service. 
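[Editor's note] Every artifact the files stage downloads above is checked against an expected SHA512 ("file matches expected sum of: ..."). The same check can be reproduced by hand for one of those downloads; the URL and digest below are copied verbatim from the log entries for kubeadm, while the streaming helper itself is only illustrative:

    # Sketch: re-verify the kubeadm binary the files stage wrote to /sysroot/opt/bin.
    import hashlib
    import urllib.request

    URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm"
    EXPECTED = ("1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
                "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660")

    digest = hashlib.sha512()
    with urllib.request.urlopen(URL) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)

    print("match:", digest.hexdigest() == EXPECTED)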
Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(13): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(13): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(14): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:40:54.676508 ignition[813]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:40:54.676508 ignition[813]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:40:54.676508 ignition[813]: INFO : files: files passed Feb 12 19:40:54.676508 ignition[813]: INFO : Ignition finished successfully Feb 12 19:40:54.731697 kernel: audit: type=1130 audit(1707766854.681:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.731753 kernel: audit: type=1131 audit(1707766854.681:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.731804 kernel: audit: type=1130 audit(1707766854.694:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.659904 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:40:54.668225 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:40:54.734795 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:40:54.671144 systemd[1]: Starting ignition-quench.service... Feb 12 19:40:54.679955 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:40:54.680179 systemd[1]: Finished ignition-quench.service. 
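[Editor's note] The "setting preset to enabled" operations above correspond to ordinary systemd preset entries written alongside the units. Purely as an illustration (the file name and location used here are an assumption, something like /etc/systemd/system-preset/20-ignition.preset), the equivalent preset lines would be:

    # hypothetical preset file, e.g. /etc/systemd/system-preset/20-ignition.preset
    enable prepare-cni-plugins.service
    enable prepare-critools.service
    enable coreos-metadata-sshkeys@.service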
Feb 12 19:40:54.689152 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:40:54.695540 systemd[1]: Reached target ignition-complete.target. Feb 12 19:40:54.705290 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:40:54.741204 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:40:54.755043 kernel: audit: type=1130 audit(1707766854.742:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.755095 kernel: audit: type=1131 audit(1707766854.742:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.741480 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:40:54.743602 systemd[1]: Reached target initrd-fs.target. Feb 12 19:40:54.755700 systemd[1]: Reached target initrd.target. Feb 12 19:40:54.757351 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:40:54.759763 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:40:54.788935 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:40:54.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.792215 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:40:54.798983 kernel: audit: type=1130 audit(1707766854.789:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.816236 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:40:54.818553 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:40:54.820818 systemd[1]: Stopped target timers.target. Feb 12 19:40:54.821793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:40:54.831963 kernel: audit: type=1131 audit(1707766854.822:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.822158 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:40:54.823604 systemd[1]: Stopped target initrd.target. Feb 12 19:40:54.832942 systemd[1]: Stopped target basic.target. Feb 12 19:40:54.834746 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:40:54.836529 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:40:54.838281 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:40:54.840074 systemd[1]: Stopped target remote-fs.target. Feb 12 19:40:54.842088 systemd[1]: Stopped target remote-fs-pre.target. 
Feb 12 19:40:54.843558 systemd[1]: Stopped target sysinit.target. Feb 12 19:40:54.845315 systemd[1]: Stopped target local-fs.target. Feb 12 19:40:54.846961 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:40:54.848827 systemd[1]: Stopped target swap.target. Feb 12 19:40:54.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.850663 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:40:54.850916 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:40:54.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.852349 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:40:54.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.854863 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:40:54.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.855253 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:40:54.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.857189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:40:54.857552 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:40:54.859687 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:40:54.877247 iscsid[693]: iscsid shutting down. Feb 12 19:40:54.859976 systemd[1]: Stopped ignition-files.service. Feb 12 19:40:54.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.861285 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:40:54.861569 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:40:54.865134 systemd[1]: Stopping ignition-mount.service... Feb 12 19:40:54.866395 systemd[1]: Stopping iscsid.service... Feb 12 19:40:54.875311 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:40:54.875717 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:40:54.881894 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:40:54.888986 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:40:54.889317 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:40:54.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.894954 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:40:54.895327 systemd[1]: Stopped dracut-pre-trigger.service. 
Feb 12 19:40:54.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.902585 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:40:54.902772 systemd[1]: Stopped iscsid.service. Feb 12 19:40:54.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.910271 systemd[1]: Stopping iscsiuio.service... Feb 12 19:40:54.911497 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:40:54.911673 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:40:54.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.922995 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:40:54.923182 systemd[1]: Stopped iscsiuio.service. Feb 12 19:40:54.928559 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:40:54.930752 ignition[851]: INFO : Ignition 2.14.0 Feb 12 19:40:54.930752 ignition[851]: INFO : Stage: umount Feb 12 19:40:54.930752 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:54.930752 ignition[851]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:54.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.937794 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:40:54.945493 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:54.945493 ignition[851]: INFO : umount: umount passed Feb 12 19:40:54.945493 ignition[851]: INFO : Ignition finished successfully Feb 12 19:40:54.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.937991 systemd[1]: Stopped ignition-mount.service. 
Feb 12 19:40:54.939114 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:40:54.939255 systemd[1]: Stopped ignition-disks.service. Feb 12 19:40:54.940041 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:40:54.940116 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:40:54.941307 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:40:54.941378 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:40:54.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.942422 systemd[1]: Stopped target network.target. Feb 12 19:40:54.944707 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:40:54.944855 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:40:54.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.946459 systemd[1]: Stopped target paths.target. Feb 12 19:40:54.947989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:40:54.951587 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:40:54.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.952559 systemd[1]: Stopped target slices.target. Feb 12 19:40:54.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.954119 systemd[1]: Stopped target sockets.target. Feb 12 19:40:54.955379 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:40:54.955514 systemd[1]: Closed iscsid.socket. Feb 12 19:40:54.956748 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:40:54.956826 systemd[1]: Closed iscsiuio.socket. Feb 12 19:40:54.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.958109 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:40:54.978000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:40:54.958218 systemd[1]: Stopped ignition-setup.service. Feb 12 19:40:54.959778 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:40:54.961441 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:40:54.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.963395 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:40:54.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.963593 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:40:54.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:54.965926 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:40:54.966098 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:40:54.967950 systemd-networkd[688]: eth1: DHCPv6 lease lost Feb 12 19:40:54.971091 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:40:54.971328 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:40:54.972762 systemd-networkd[688]: eth0: DHCPv6 lease lost Feb 12 19:40:55.009000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:40:54.975992 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:40:54.976203 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:40:55.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.978885 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:40:54.978961 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:40:55.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.984628 systemd[1]: Stopping network-cleanup.service... Feb 12 19:40:54.995580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:40:54.995696 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:40:55.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.996730 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:40:55.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.996811 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:40:55.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.998270 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:40:54.998336 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:40:55.003970 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:40:55.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.007601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:40:55.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.011191 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:40:55.011477 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:40:55.015959 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 12 19:40:55.016094 systemd[1]: Stopped network-cleanup.service. Feb 12 19:40:55.017586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:40:55.017646 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:40:55.018519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:40:55.018569 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:40:55.019773 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:40:55.019843 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:40:55.021252 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:40:55.021311 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:40:55.022527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:40:55.022584 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:40:55.024860 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:40:55.025879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:40:55.025960 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:40:55.036192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:40:55.036341 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:40:55.037480 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:40:55.040294 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:40:55.052998 systemd[1]: Switching root. Feb 12 19:40:55.055000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:40:55.055000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:40:55.055000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:40:55.061000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:40:55.061000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:40:55.078359 systemd-journald[184]: Journal stopped Feb 12 19:41:00.424890 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 12 19:41:00.424981 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:41:00.425004 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:41:00.425023 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:41:00.425040 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:41:00.425064 kernel: SELinux: policy capability open_perms=1 Feb 12 19:41:00.425082 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:41:00.425100 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:41:00.425129 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:41:00.425146 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:41:00.425164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:41:00.425181 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:41:00.425201 systemd[1]: Successfully loaded SELinux policy in 64.856ms. Feb 12 19:41:00.435779 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.029ms. Feb 12 19:41:00.435826 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:41:00.435854 systemd[1]: Detected virtualization kvm. Feb 12 19:41:00.435879 systemd[1]: Detected architecture x86-64. Feb 12 19:41:00.435898 systemd[1]: Detected first boot. 
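[Editor's note] The "policy capability ...=0/1" lines above are also exposed at runtime through selinuxfs, so the values the kernel printed while loading the policy can be read back after boot. A small sketch, assuming selinuxfs is mounted at its conventional /sys/fs/selinux location:

    # Sketch: print the same "policy capability X=Y" pairs the kernel logged,
    # reading them from selinuxfs (assumes it is mounted at /sys/fs/selinux).
    from pathlib import Path

    caps = Path("/sys/fs/selinux/policy_capabilities")
    for cap in sorted(caps.iterdir()):
        print(f"SELinux: policy capability {cap.name}={cap.read_text().strip()}")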
Feb 12 19:41:00.435928 systemd[1]: Hostname set to <ci-3510.3.2-d-fc9a4b050f>. Feb 12 19:41:00.435957 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:41:00.435978 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:41:00.435999 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:41:00.436020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:41:00.436042 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:41:00.436068 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:41:00.436088 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:41:00.436113 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:41:00.436132 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:41:00.436151 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:41:00.436171 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 19:41:00.436190 systemd[1]: Created slice system-getty.slice. Feb 12 19:41:00.436209 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:41:00.436232 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:41:00.436253 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:41:00.436272 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:41:00.436292 systemd[1]: Created slice user.slice. Feb 12 19:41:00.436311 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:41:00.436330 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:41:00.436350 systemd[1]: Set up automount boot.automount. Feb 12 19:41:00.436370 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:41:00.436395 systemd[1]: Reached target integritysetup.target. Feb 12 19:41:00.436414 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:41:00.436450 systemd[1]: Reached target remote-fs.target. Feb 12 19:41:00.436470 systemd[1]: Reached target slices.target. Feb 12 19:41:00.436489 systemd[1]: Reached target swap.target. Feb 12 19:41:00.436507 systemd[1]: Reached target torcx.target. Feb 12 19:41:00.436528 systemd[1]: Reached target veritysetup.target. Feb 12 19:41:00.436547 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:41:00.436571 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:41:00.436591 kernel: kauditd_printk_skb: 51 callbacks suppressed Feb 12 19:41:00.436609 kernel: audit: type=1400 audit(1707766860.116:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:41:00.436629 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:41:00.436648 kernel: audit: type=1335 audit(1707766860.116:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:41:00.436666 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:41:00.436685 systemd[1]: Listening on systemd-journald.socket.
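[Editor's note] The unit-file warnings above are deprecation notices, not failures: CPUShares=/MemoryLimit= in locksmithd.service have CPUWeight=/MemoryMax= replacements, and docker.socket still points at the legacy /var/run path. The docker.socket case can be silenced with the one-line change the log itself suggests; the drop-in path below is only an illustration (the empty ListenStream= first clears the inherited listener before the new path is added):

    # hypothetical drop-in, e.g. /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock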
Feb 12 19:41:00.436705 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:41:00.436729 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:41:00.436748 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:41:00.436766 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:41:00.436786 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:41:00.436806 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:41:00.436824 systemd[1]: Mounting media.mount... Feb 12 19:41:00.436843 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:41:00.436862 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:41:00.436882 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:41:00.436900 systemd[1]: Mounting tmp.mount... Feb 12 19:41:00.436925 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:41:00.436944 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:41:00.436963 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:41:00.436982 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:41:00.437001 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:41:00.437021 systemd[1]: Starting modprobe@drm.service... Feb 12 19:41:00.437040 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:41:00.437058 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:41:00.437111 systemd[1]: Starting modprobe@loop.service... Feb 12 19:41:00.437138 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:41:00.437159 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:41:00.437177 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:41:00.437196 systemd[1]: Starting systemd-journald.service... Feb 12 19:41:00.437214 kernel: loop: module loaded Feb 12 19:41:00.437232 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:41:00.437252 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:41:00.437270 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:41:00.437294 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:41:00.437314 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:41:00.437332 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:41:00.437352 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:41:00.437387 systemd[1]: Mounted media.mount. Feb 12 19:41:00.437408 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:41:00.440965 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:41:00.441042 systemd[1]: Mounted tmp.mount. Feb 12 19:41:00.441065 systemd[1]: Finished kmod-static-nodes.service. 
Feb 12 19:41:00.441087 kernel: audit: type=1305 audit(1707766860.403:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:41:00.441118 kernel: fuse: init (API version 7.34) Feb 12 19:41:00.441138 kernel: audit: type=1300 audit(1707766860.403:93): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff7268b7c0 a2=4000 a3=7fff7268b85c items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:00.441157 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:41:00.441176 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:41:00.441195 kernel: audit: type=1327 audit(1707766860.403:93): proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:41:00.441213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:41:00.441241 kernel: audit: type=1130 audit(1707766860.417:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.441266 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:41:00.441286 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:41:00.441312 systemd-journald[994]: Journal started Feb 12 19:41:00.441463 systemd-journald[994]: Runtime Journal (/run/log/journal/c321619e97aa428d86125bb8d4173fb9) is 4.9M, max 39.5M, 34.5M free. Feb 12 19:41:00.116000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:41:00.116000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:41:00.403000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:41:00.403000 audit[994]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff7268b7c0 a2=4000 a3=7fff7268b85c items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:00.403000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:41:00.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.444631 systemd[1]: Finished modprobe@drm.service. Feb 12 19:41:00.444702 kernel: audit: type=1130 audit(1707766860.425:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.455931 systemd[1]: Started systemd-journald.service. 
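[Editor's note] journald reports its own runtime journal usage above ("Runtime Journal (/run/log/journal/c321619e97aa428d86125bb8d4173fb9) is 4.9M, max 39.5M, 34.5M free"). A rough userspace cross-check of that figure, as a sketch only (the machine-id directory comes from the log; only *.journal files are counted):

    # Sketch: total up the runtime journal files to approximate journald's size report.
    from pathlib import Path

    journal_dir = Path("/run/log/journal/c321619e97aa428d86125bb8d4173fb9")
    total = sum(f.stat().st_size for f in journal_dir.glob("*.journal"))
    print(f"runtime journal: {total / 2**20:.1f} MiB")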
Feb 12 19:41:00.456026 kernel: audit: type=1131 audit(1707766860.425:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.455626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:41:00.455894 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:41:00.463024 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:41:00.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.470512 kernel: audit: type=1130 audit(1707766860.437:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.470939 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:41:00.472145 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:41:00.490911 kernel: audit: type=1131 audit(1707766860.437:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:41:00.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.479570 systemd[1]: Finished modprobe@loop.service. Feb 12 19:41:00.480920 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:41:00.482256 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:41:00.483599 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:41:00.484797 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:41:00.486808 systemd[1]: Reached target network-pre.target. Feb 12 19:41:00.495143 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:41:00.502982 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:41:00.503891 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:41:00.510494 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:41:00.514735 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:41:00.523634 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:41:00.525734 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:41:00.526540 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:41:00.535118 systemd-journald[994]: Time spent on flushing to /var/log/journal/c321619e97aa428d86125bb8d4173fb9 is 111.787ms for 1107 entries. Feb 12 19:41:00.535118 systemd-journald[994]: System Journal (/var/log/journal/c321619e97aa428d86125bb8d4173fb9) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:41:00.664514 systemd-journald[994]: Received client request to flush runtime journal. Feb 12 19:41:00.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:00.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.535504 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:41:00.541745 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:41:00.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.549208 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:41:00.552766 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:41:00.576539 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:41:00.673964 udevadm[1048]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:41:00.577302 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:41:00.590538 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:41:00.611557 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:41:00.614357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:41:00.648030 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:41:00.651019 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:41:00.665967 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:41:00.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:00.685878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:41:01.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:01.726115 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:41:01.729187 systemd[1]: Starting systemd-udevd.service... Feb 12 19:41:01.769088 systemd-udevd[1054]: Using default interface naming scheme 'v252'. Feb 12 19:41:01.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:01.840030 systemd[1]: Started systemd-udevd.service. Feb 12 19:41:01.843924 systemd[1]: Starting systemd-networkd.service... Feb 12 19:41:01.858514 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:41:01.936520 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:41:01.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:01.974614 systemd[1]: Started systemd-userdbd.service. Feb 12 19:41:02.032023 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 12 19:41:02.032442 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:41:02.036605 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:41:02.041527 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:41:02.044323 systemd[1]: Starting modprobe@loop.service... Feb 12 19:41:02.049095 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:41:02.049281 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:41:02.051544 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:41:02.052407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:41:02.052756 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:41:02.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.059847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:41:02.060157 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:41:02.068022 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:41:02.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.070136 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:41:02.070542 systemd[1]: Finished modprobe@loop.service. Feb 12 19:41:02.071545 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:41:02.165463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:41:02.170471 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:41:02.182991 systemd-networkd[1061]: lo: Link UP Feb 12 19:41:02.183006 systemd-networkd[1061]: lo: Gained carrier Feb 12 19:41:02.184097 systemd-networkd[1061]: Enumeration completed Feb 12 19:41:02.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:02.184245 systemd-networkd[1061]: eth1: Configuring with /run/systemd/network/10-b6:31:f9:34:25:b7.network. Feb 12 19:41:02.184339 systemd[1]: Started systemd-networkd.service. Feb 12 19:41:02.187159 systemd-networkd[1061]: eth0: Configuring with /run/systemd/network/10-1e:99:ec:e4:fc:4f.network. Feb 12 19:41:02.188720 systemd-networkd[1061]: eth1: Link UP Feb 12 19:41:02.188735 systemd-networkd[1061]: eth1: Gained carrier Feb 12 19:41:02.197551 systemd-networkd[1061]: eth0: Link UP Feb 12 19:41:02.197564 systemd-networkd[1061]: eth0: Gained carrier Feb 12 19:41:02.210859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:41:02.231000 audit[1066]: AVC avc: denied { confidentiality } for pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:41:02.231000 audit[1066]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564e384f4060 a1=32194 a2=7f81e5478bc5 a3=5 items=108 ppid=1054 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:02.231000 audit: CWD cwd="/" Feb 12 19:41:02.231000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=1 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=2 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=3 name=(null) inode=14001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=4 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=5 name=(null) inode=14002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=6 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=7 name=(null) inode=14003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=8 name=(null) inode=14003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=9 name=(null) inode=14004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=10 name=(null) inode=14003 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=11 name=(null) inode=14005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=12 name=(null) inode=14003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=13 name=(null) inode=14006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=14 name=(null) inode=14003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=15 name=(null) inode=14007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=16 name=(null) inode=14003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=17 name=(null) inode=14008 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=18 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=19 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=20 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=21 name=(null) inode=14010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=22 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=23 name=(null) inode=14011 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=24 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=25 name=(null) inode=14012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=26 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=27 name=(null) inode=14013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=28 name=(null) inode=14009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=29 name=(null) inode=14014 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=30 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=31 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=32 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=33 name=(null) inode=14016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=34 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=35 name=(null) inode=14017 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=36 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=37 name=(null) inode=14018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=38 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=39 name=(null) inode=14019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=40 name=(null) inode=14015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=41 name=(null) inode=14020 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=42 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 
audit: PATH item=43 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=44 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=45 name=(null) inode=14022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=46 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=47 name=(null) inode=14023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=48 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=49 name=(null) inode=14024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=50 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=51 name=(null) inode=14025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=52 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=53 name=(null) inode=14026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=55 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=56 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=57 name=(null) inode=14028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=58 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=59 name=(null) inode=14029 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=60 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=61 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=62 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=63 name=(null) inode=14031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=64 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=65 name=(null) inode=14032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=66 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=67 name=(null) inode=14033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=68 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=69 name=(null) inode=14034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=70 name=(null) inode=14030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=71 name=(null) inode=14035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=72 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=73 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=74 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=75 name=(null) inode=14037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=76 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=77 name=(null) inode=14038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=78 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=79 name=(null) inode=14039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=80 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=81 name=(null) inode=14040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=82 name=(null) inode=14036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=83 name=(null) inode=14041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=84 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=85 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=86 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=87 name=(null) inode=14043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=88 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=89 name=(null) inode=14044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=90 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=91 name=(null) inode=14045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 
audit: PATH item=92 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=93 name=(null) inode=14046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=94 name=(null) inode=14042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=95 name=(null) inode=14047 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=96 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=97 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=98 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=99 name=(null) inode=14049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=100 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=101 name=(null) inode=14050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=102 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=103 name=(null) inode=14051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=104 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=105 name=(null) inode=14052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=106 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PATH item=107 name=(null) inode=14053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:02.231000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:41:02.313472 
kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 19:41:02.320480 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:41:02.330491 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:41:02.517475 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:41:02.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.559244 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:41:02.562346 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:41:02.604573 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:41:02.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.642772 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:41:02.644355 systemd[1]: Reached target cryptsetup.target. Feb 12 19:41:02.651245 systemd[1]: Starting lvm2-activation.service... Feb 12 19:41:02.660814 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:41:02.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.701494 systemd[1]: Finished lvm2-activation.service. Feb 12 19:41:02.702533 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:41:02.707692 systemd[1]: Mounting media-configdrive.mount... Feb 12 19:41:02.708869 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:41:02.708964 systemd[1]: Reached target machines.target. Feb 12 19:41:02.712217 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:41:02.734196 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:41:02.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.744499 kernel: ISO 9660 Extensions: RRIP_1991A Feb 12 19:41:02.744897 systemd[1]: Mounted media-configdrive.mount. Feb 12 19:41:02.746044 systemd[1]: Reached target local-fs.target. Feb 12 19:41:02.749694 systemd[1]: Starting ldconfig.service... Feb 12 19:41:02.752392 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:41:02.752519 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:41:02.755103 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:41:02.761895 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:41:02.764797 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:41:02.765758 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. 
Feb 12 19:41:02.774187 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:41:02.775997 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Feb 12 19:41:02.784089 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:41:02.803139 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:41:02.818447 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:41:02.823438 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:41:02.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:02.936130 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:41:02.937340 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:41:03.091163 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Feb 12 19:41:03.091163 systemd-fsck[1115]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 19:41:03.105549 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:41:03.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.110813 systemd[1]: Mounting boot.mount... Feb 12 19:41:03.177159 systemd[1]: Mounted boot.mount. Feb 12 19:41:03.260863 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:41:03.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.468569 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:41:03.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.472193 systemd[1]: Starting audit-rules.service... Feb 12 19:41:03.476114 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:41:03.486736 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:41:03.492406 systemd[1]: Starting systemd-resolved.service... Feb 12 19:41:03.503707 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:41:03.516601 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:41:03.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.520027 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:41:03.522572 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:41:03.554000 audit[1133]: SYSTEM_BOOT pid=1133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? 
addr=? terminal=? res=success' Feb 12 19:41:03.560092 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:41:03.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.595148 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:41:03.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:03.643000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:41:03.643000 audit[1146]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffede812cd0 a2=420 a3=0 items=0 ppid=1123 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:03.643000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:41:03.644566 augenrules[1146]: No rules Feb 12 19:41:03.645946 systemd[1]: Finished audit-rules.service. Feb 12 19:41:03.702823 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:41:03.703826 systemd[1]: Reached target time-set.target. Feb 12 19:41:03.745683 systemd-timesyncd[1129]: Contacted time server 23.141.40.124:123 (0.flatcar.pool.ntp.org). Feb 12 19:41:03.746533 systemd-timesyncd[1129]: Initial clock synchronization to Mon 2024-02-12 19:41:03.380039 UTC. Feb 12 19:41:03.747905 systemd-resolved[1126]: Positive Trust Anchors: Feb 12 19:41:03.747932 systemd-resolved[1126]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:41:03.747979 systemd-resolved[1126]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:41:03.761741 systemd-resolved[1126]: Using system hostname 'ci-3510.3.2-d-fc9a4b050f'. Feb 12 19:41:03.765555 systemd[1]: Started systemd-resolved.service. Feb 12 19:41:03.766518 systemd[1]: Reached target network.target. Feb 12 19:41:03.767361 systemd[1]: Reached target nss-lookup.target. Feb 12 19:41:03.911076 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:41:03.913910 systemd-networkd[1061]: eth1: Gained IPv6LL Feb 12 19:41:03.931286 systemd[1]: Finished ldconfig.service. Feb 12 19:41:03.934416 systemd[1]: Starting systemd-update-done.service... Feb 12 19:41:03.949820 systemd[1]: Finished systemd-update-done.service. Feb 12 19:41:03.950761 systemd[1]: Reached target sysinit.target. Feb 12 19:41:03.951602 systemd[1]: Started motdgen.path. Feb 12 19:41:03.952255 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:41:03.953322 systemd[1]: Started logrotate.timer. Feb 12 19:41:03.954126 systemd[1]: Started mdadm.timer. 
Feb 12 19:41:03.954946 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:41:03.955613 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:41:03.955660 systemd[1]: Reached target paths.target. Feb 12 19:41:03.956249 systemd[1]: Reached target timers.target. Feb 12 19:41:03.957372 systemd[1]: Listening on dbus.socket. Feb 12 19:41:03.960212 systemd[1]: Starting docker.socket... Feb 12 19:41:03.963252 systemd[1]: Listening on sshd.socket. Feb 12 19:41:03.964369 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:41:03.965409 systemd[1]: Listening on docker.socket. Feb 12 19:41:03.966385 systemd[1]: Reached target sockets.target. Feb 12 19:41:03.967274 systemd[1]: Reached target basic.target. Feb 12 19:41:03.968295 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:41:03.968517 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:41:03.968665 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:41:03.971401 systemd[1]: Starting containerd.service... Feb 12 19:41:03.974420 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 19:41:03.977597 systemd[1]: Starting dbus.service... Feb 12 19:41:03.981152 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:41:03.990258 systemd[1]: Starting extend-filesystems.service... Feb 12 19:41:03.991210 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:41:03.993732 systemd[1]: Starting motdgen.service... Feb 12 19:41:04.033538 jq[1162]: false Feb 12 19:41:04.000411 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:41:04.005821 systemd[1]: Starting prepare-critools.service... Feb 12 19:41:04.012500 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:41:04.018467 systemd[1]: Starting sshd-keygen.service... Feb 12 19:41:04.036397 systemd[1]: Starting systemd-logind.service... Feb 12 19:41:04.037212 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:41:04.037321 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:41:04.039583 systemd[1]: Starting update-engine.service... Feb 12 19:41:04.042404 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:41:04.043583 systemd-networkd[1061]: eth0: Gained IPv6LL Feb 12 19:41:04.050772 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:41:04.051367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:41:04.084791 tar[1182]: ./ Feb 12 19:41:04.084791 tar[1182]: ./macvlan Feb 12 19:41:04.070837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:41:04.071189 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:41:04.094297 tar[1183]: crictl Feb 12 19:41:04.094930 jq[1179]: true Feb 12 19:41:04.123865 jq[1187]: true Feb 12 19:41:04.152699 dbus-daemon[1161]: [system] SELinux support is enabled Feb 12 19:41:04.154311 systemd[1]: Started dbus.service. 
Feb 12 19:41:04.159491 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:41:04.159535 systemd[1]: Reached target system-config.target. Feb 12 19:41:04.160327 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:41:04.164096 systemd[1]: Starting user-configdrive.service... Feb 12 19:41:04.192467 extend-filesystems[1165]: Found vda Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda1 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda2 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda3 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found usr Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda4 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda6 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda7 Feb 12 19:41:04.194780 extend-filesystems[1165]: Found vda9 Feb 12 19:41:04.194780 extend-filesystems[1165]: Checking size of /dev/vda9 Feb 12 19:41:04.214327 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:41:04.214764 systemd[1]: Finished motdgen.service. Feb 12 19:41:04.286744 extend-filesystems[1165]: Resized partition /dev/vda9 Feb 12 19:41:04.295583 update_engine[1178]: I0212 19:41:04.294339 1178 main.cc:92] Flatcar Update Engine starting Feb 12 19:41:04.310518 systemd[1]: Started update-engine.service. Feb 12 19:41:04.315513 systemd[1]: Started locksmithd.service. Feb 12 19:41:04.319500 update_engine[1178]: I0212 19:41:04.319447 1178 update_check_scheduler.cc:74] Next update check in 5m12s Feb 12 19:41:04.329819 extend-filesystems[1230]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:41:04.333163 coreos-cloudinit[1201]: 2024/02/12 19:41:04 Checking availability of "cloud-drive" Feb 12 19:41:04.333163 coreos-cloudinit[1201]: 2024/02/12 19:41:04 Fetching user-data from datasource of type "cloud-drive" Feb 12 19:41:04.333163 coreos-cloudinit[1201]: 2024/02/12 19:41:04 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 12 19:41:04.334599 coreos-cloudinit[1201]: 2024/02/12 19:41:04 Fetching meta-data from datasource of type "cloud-drive" Feb 12 19:41:04.334599 coreos-cloudinit[1201]: 2024/02/12 19:41:04 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 12 19:41:04.345496 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 12 19:41:04.350722 coreos-cloudinit[1201]: Detected an Ignition config. Exiting... Feb 12 19:41:04.351557 systemd[1]: Finished user-configdrive.service. Feb 12 19:41:04.352490 systemd[1]: Reached target user-config.target. Feb 12 19:41:04.426937 systemd-logind[1174]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:41:04.426980 systemd-logind[1174]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:41:04.427950 systemd-logind[1174]: New seat seat0. Feb 12 19:41:04.435150 bash[1227]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:41:04.435681 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:41:04.437754 systemd[1]: Started systemd-logind.service. 
Feb 12 19:41:04.453646 env[1188]: time="2024-02-12T19:41:04.453011793Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:41:04.488539 tar[1182]: ./static Feb 12 19:41:04.498205 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 12 19:41:04.554357 env[1188]: time="2024-02-12T19:41:04.551186762Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:41:04.554357 env[1188]: time="2024-02-12T19:41:04.551710748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.554357 env[1188]: time="2024-02-12T19:41:04.553541108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:04.554357 env[1188]: time="2024-02-12T19:41:04.553597361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.555477 extend-filesystems[1230]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:41:04.555477 extend-filesystems[1230]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 12 19:41:04.555477 extend-filesystems[1230]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 12 19:41:04.561756 extend-filesystems[1165]: Resized filesystem in /dev/vda9 Feb 12 19:41:04.561756 extend-filesystems[1165]: Found vdb Feb 12 19:41:04.556559 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:41:04.556945 systemd[1]: Finished extend-filesystems.service. Feb 12 19:41:04.570464 coreos-metadata[1160]: Feb 12 19:41:04.570 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:41:04.570464 coreos-metadata[1160]: Feb 12 19:41:04.570 INFO Fetch successful Feb 12 19:41:04.575263 env[1188]: time="2024-02-12T19:41:04.575157451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:04.575263 env[1188]: time="2024-02-12T19:41:04.575252423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.575540 env[1188]: time="2024-02-12T19:41:04.575286716Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:41:04.575540 env[1188]: time="2024-02-12T19:41:04.575305563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.575662 env[1188]: time="2024-02-12T19:41:04.575569702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.575959 env[1188]: time="2024-02-12T19:41:04.575927085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:04.576303 env[1188]: time="2024-02-12T19:41:04.576259084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:04.576363 env[1188]: time="2024-02-12T19:41:04.576301051Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:41:04.576416 env[1188]: time="2024-02-12T19:41:04.576393903Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:41:04.576416 env[1188]: time="2024-02-12T19:41:04.576409681Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:41:04.591480 unknown[1160]: wrote ssh authorized keys file for user: core Feb 12 19:41:04.610149 env[1188]: time="2024-02-12T19:41:04.610024137Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:41:04.610342 env[1188]: time="2024-02-12T19:41:04.610175741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:41:04.610342 env[1188]: time="2024-02-12T19:41:04.610199547Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:41:04.610342 env[1188]: time="2024-02-12T19:41:04.610269589Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610341224Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610371141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610389916Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610409327Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610468168Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610491 env[1188]: time="2024-02-12T19:41:04.610487860Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610703 env[1188]: time="2024-02-12T19:41:04.610505808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:41:04.610703 env[1188]: time="2024-02-12T19:41:04.610523607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:41:04.610785 env[1188]: time="2024-02-12T19:41:04.610713702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:41:04.610869 env[1188]: time="2024-02-12T19:41:04.610844251Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:41:04.611459 env[1188]: time="2024-02-12T19:41:04.611406701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 12 19:41:04.611574 env[1188]: time="2024-02-12T19:41:04.611468641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611574 env[1188]: time="2024-02-12T19:41:04.611489138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:41:04.611696 env[1188]: time="2024-02-12T19:41:04.611577920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611770 env[1188]: time="2024-02-12T19:41:04.611700441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611770 env[1188]: time="2024-02-12T19:41:04.611719882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611770 env[1188]: time="2024-02-12T19:41:04.611760569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611942 env[1188]: time="2024-02-12T19:41:04.611787683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611942 env[1188]: time="2024-02-12T19:41:04.611809470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611942 env[1188]: time="2024-02-12T19:41:04.611845481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611942 env[1188]: time="2024-02-12T19:41:04.611863406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.611942 env[1188]: time="2024-02-12T19:41:04.611882927Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:41:04.612161 env[1188]: time="2024-02-12T19:41:04.612102204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.612161 env[1188]: time="2024-02-12T19:41:04.612146018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.612244 env[1188]: time="2024-02-12T19:41:04.612169386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:41:04.612244 env[1188]: time="2024-02-12T19:41:04.612185847Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:41:04.612244 env[1188]: time="2024-02-12T19:41:04.612220448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:41:04.612244 env[1188]: time="2024-02-12T19:41:04.612236971Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:41:04.612389 env[1188]: time="2024-02-12T19:41:04.612279289Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:41:04.612389 env[1188]: time="2024-02-12T19:41:04.612328372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:41:04.612767 env[1188]: time="2024-02-12T19:41:04.612656473Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:41:04.612767 env[1188]: time="2024-02-12T19:41:04.612767233Z" level=info msg="Connect containerd service" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.612825306Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.613891915Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614049109Z" level=info msg="Start subscribing containerd event" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614131582Z" level=info msg="Start recovering state" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614236459Z" level=info msg="Start event monitor" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614852085Z" level=info msg="Start snapshots syncer" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614885002Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.614909933Z" level=info msg="Start streaming server" Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.616305457Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 19:41:04.617729 env[1188]: time="2024-02-12T19:41:04.616452034Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:41:04.624008 systemd[1]: Started containerd.service. Feb 12 19:41:04.630160 update-ssh-keys[1241]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:41:04.630642 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 19:41:04.646789 tar[1182]: ./vlan Feb 12 19:41:04.656233 env[1188]: time="2024-02-12T19:41:04.643221536Z" level=info msg="containerd successfully booted in 0.206525s" Feb 12 19:41:04.721280 tar[1182]: ./portmap Feb 12 19:41:04.798049 tar[1182]: ./host-local Feb 12 19:41:04.866725 tar[1182]: ./vrf Feb 12 19:41:05.100663 tar[1182]: ./bridge Feb 12 19:41:05.389352 tar[1182]: ./tuning Feb 12 19:41:05.471765 tar[1182]: ./firewall Feb 12 19:41:05.562101 tar[1182]: ./host-device Feb 12 19:41:05.645127 tar[1182]: ./sbr Feb 12 19:41:05.720571 tar[1182]: ./loopback Feb 12 19:41:05.766594 tar[1182]: ./dhcp Feb 12 19:41:05.853096 systemd[1]: Finished prepare-critools.service. Feb 12 19:41:05.918039 tar[1182]: ./ptp Feb 12 19:41:05.971287 tar[1182]: ./ipvlan Feb 12 19:41:06.022373 tar[1182]: ./bandwidth Feb 12 19:41:06.096491 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:41:06.145032 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:41:06.953798 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:41:06.999003 systemd[1]: Finished sshd-keygen.service. Feb 12 19:41:07.002507 systemd[1]: Starting issuegen.service... Feb 12 19:41:07.014696 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:41:07.015059 systemd[1]: Finished issuegen.service. Feb 12 19:41:07.018658 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:41:07.031029 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:41:07.035085 systemd[1]: Started getty@tty1.service. Feb 12 19:41:07.038964 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:41:07.040417 systemd[1]: Reached target getty.target. Feb 12 19:41:07.041670 systemd[1]: Reached target multi-user.target. Feb 12 19:41:07.045215 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:41:07.058121 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:41:07.058678 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:41:07.060346 systemd[1]: Startup finished in 8.707s (kernel) + 11.745s (userspace) = 20.452s. Feb 12 19:41:13.336676 systemd[1]: Created slice system-sshd.slice. Feb 12 19:41:13.338889 systemd[1]: Started sshd@0-143.198.151.132:22-139.178.68.195:37088.service. Feb 12 19:41:13.419710 sshd[1277]: Accepted publickey for core from 139.178.68.195 port 37088 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:13.423144 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:13.442244 systemd[1]: Created slice user-500.slice. Feb 12 19:41:13.444165 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:41:13.448669 systemd-logind[1174]: New session 1 of user core. Feb 12 19:41:13.464612 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:41:13.469137 systemd[1]: Starting user@500.service... 
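The entries above show containerd finishing its plugin setup and serving on /run/containerd/containerd.sock (gRPC and ttrpc endpoints), after which systemd marks containerd.service started. As a minimal sketch, assuming the github.com/containerd/containerd Go module is available on a machine with access to that socket, the daemon can be queried like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the endpoint reported in the CRI config dump above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Most operations require a namespace; "k8s.io" is the one the CRI plugin uses.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("query version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
}
```

The earlier "failed to load cni during init" message is expected at this stage: the CRI plugin looks in /etc/cni/net.d (per the config dump above) and no network configuration has been written there yet.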
Feb 12 19:41:13.477133 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:13.605814 systemd[1282]: Queued start job for default target default.target. Feb 12 19:41:13.607345 systemd[1282]: Reached target paths.target. Feb 12 19:41:13.607615 systemd[1282]: Reached target sockets.target. Feb 12 19:41:13.607848 systemd[1282]: Reached target timers.target. Feb 12 19:41:13.607998 systemd[1282]: Reached target basic.target. Feb 12 19:41:13.608190 systemd[1282]: Reached target default.target. Feb 12 19:41:13.608357 systemd[1]: Started user@500.service. Feb 12 19:41:13.609530 systemd[1]: Started session-1.scope. Feb 12 19:41:13.611532 systemd[1282]: Startup finished in 123ms. Feb 12 19:41:13.676689 systemd[1]: Started sshd@1-143.198.151.132:22-139.178.68.195:37102.service. Feb 12 19:41:13.733366 sshd[1291]: Accepted publickey for core from 139.178.68.195 port 37102 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:13.735609 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:13.742725 systemd-logind[1174]: New session 2 of user core. Feb 12 19:41:13.744209 systemd[1]: Started session-2.scope. Feb 12 19:41:13.821577 sshd[1291]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:13.828180 systemd[1]: Started sshd@2-143.198.151.132:22-139.178.68.195:37116.service. Feb 12 19:41:13.829488 systemd[1]: sshd@1-143.198.151.132:22-139.178.68.195:37102.service: Deactivated successfully. Feb 12 19:41:13.831236 systemd-logind[1174]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:41:13.832233 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:41:13.833158 systemd-logind[1174]: Removed session 2. Feb 12 19:41:13.897575 sshd[1297]: Accepted publickey for core from 139.178.68.195 port 37116 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:13.900066 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:13.907518 systemd-logind[1174]: New session 3 of user core. Feb 12 19:41:13.908524 systemd[1]: Started session-3.scope. Feb 12 19:41:13.972141 sshd[1297]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:13.977621 systemd[1]: Started sshd@3-143.198.151.132:22-139.178.68.195:37122.service. Feb 12 19:41:13.984076 systemd[1]: sshd@2-143.198.151.132:22-139.178.68.195:37116.service: Deactivated successfully. Feb 12 19:41:13.985602 systemd-logind[1174]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:41:13.985741 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:41:13.990665 systemd-logind[1174]: Removed session 3. Feb 12 19:41:14.038790 sshd[1303]: Accepted publickey for core from 139.178.68.195 port 37122 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:14.041301 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:14.048514 systemd-logind[1174]: New session 4 of user core. Feb 12 19:41:14.050110 systemd[1]: Started session-4.scope. Feb 12 19:41:14.120238 sshd[1303]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:14.126140 systemd[1]: Started sshd@4-143.198.151.132:22-139.178.68.195:37134.service. Feb 12 19:41:14.127316 systemd[1]: sshd@3-143.198.151.132:22-139.178.68.195:37122.service: Deactivated successfully. Feb 12 19:41:14.134896 systemd-logind[1174]: Session 4 logged out. Waiting for processes to exit. 
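Sessions 1 through 4 above all follow the same pattern: sshd accepts a publickey login for core, pam_unix opens the session, systemd-logind registers it, and a session-N.scope is started and later deactivated. A rough sketch of the client side of such a publickey login, assuming golang.org/x/crypto/ssh and a hypothetical private-key path, looks like this:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	config := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// A real client should verify the host key rather than ignore it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	// Address taken from the sshd service names in the log (143.198.151.132:22).
	client, err := ssh.Dial("tcp", "143.198.151.132:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```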
Feb 12 19:41:14.135084 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:41:14.137738 systemd-logind[1174]: Removed session 4. Feb 12 19:41:14.184875 sshd[1311]: Accepted publickey for core from 139.178.68.195 port 37134 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:14.187531 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:14.196346 systemd-logind[1174]: New session 5 of user core. Feb 12 19:41:14.196797 systemd[1]: Started session-5.scope. Feb 12 19:41:14.282964 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 12 19:41:14.283888 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:41:14.294879 dbus-daemon[1161]: \xd0\xed5F\xdeU: received setenforce notice (enforcing=82646672) Feb 12 19:41:14.297745 sudo[1316]: pam_unix(sudo:session): session closed for user root Feb 12 19:41:14.304595 sshd[1311]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:14.310661 systemd[1]: Started sshd@5-143.198.151.132:22-139.178.68.195:37150.service. Feb 12 19:41:14.316240 systemd[1]: sshd@4-143.198.151.132:22-139.178.68.195:37134.service: Deactivated successfully. Feb 12 19:41:14.317672 systemd-logind[1174]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:41:14.317783 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:41:14.322781 systemd-logind[1174]: Removed session 5. Feb 12 19:41:14.372502 sshd[1318]: Accepted publickey for core from 139.178.68.195 port 37150 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:14.375470 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:14.383406 systemd-logind[1174]: New session 6 of user core. Feb 12 19:41:14.384577 systemd[1]: Started session-6.scope. Feb 12 19:41:14.455553 sudo[1325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 12 19:41:14.457266 sudo[1325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:41:14.462900 sudo[1325]: pam_unix(sudo:session): session closed for user root Feb 12 19:41:14.471676 sudo[1324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 12 19:41:14.472030 sudo[1324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:41:14.489037 systemd[1]: Stopping audit-rules.service... Feb 12 19:41:14.490000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:41:14.491913 auditctl[1328]: No rules Feb 12 19:41:14.492720 kernel: kauditd_printk_skb: 156 callbacks suppressed Feb 12 19:41:14.492828 kernel: audit: type=1305 audit(1707766874.490:142): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:41:14.496846 systemd[1]: audit-rules.service: Deactivated successfully. Feb 12 19:41:14.497257 systemd[1]: Stopped audit-rules.service. 
Feb 12 19:41:14.490000 audit[1328]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca6a5ec60 a2=420 a3=0 items=0 ppid=1 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:14.508452 kernel: audit: type=1300 audit(1707766874.490:142): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca6a5ec60 a2=420 a3=0 items=0 ppid=1 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:14.501897 systemd[1]: Starting audit-rules.service... Feb 12 19:41:14.490000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:41:14.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.517555 kernel: audit: type=1327 audit(1707766874.490:142): proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:41:14.517726 kernel: audit: type=1131 audit(1707766874.496:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.541259 augenrules[1346]: No rules Feb 12 19:41:14.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.544405 sudo[1324]: pam_unix(sudo:session): session closed for user root Feb 12 19:41:14.542478 systemd[1]: Finished audit-rules.service. Feb 12 19:41:14.550468 kernel: audit: type=1130 audit(1707766874.541:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.544000 audit[1324]: USER_END pid=1324 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.558648 kernel: audit: type=1106 audit(1707766874.544:145): pid=1324 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.559011 sshd[1318]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:14.544000 audit[1324]: CRED_DISP pid=1324 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.563663 systemd[1]: Started sshd@6-143.198.151.132:22-139.178.68.195:37152.service. 
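From here on the journal interleaves raw audit records (kauditd had been suppressing them, per the "callbacks suppressed" line above). The PROCTITLE field in those records is the process command line, hex-encoded with NUL-separated arguments. A small decoder, a hypothetical helper and not part of the log, turns it back into readable text:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE value (hex-encoded argv with
// NUL separators) into a space-separated command line.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Value taken from the PROCTITLE record above.
	cmd, err := decodeProctitle("2F7362696E2F617564697463746C002D44")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cmd) // prints: /sbin/auditctl -D
}
```

Decoding the record above yields `/sbin/auditctl -D`, the rule flush performed while audit-rules.service restarts; the later NETFILTER_CFG records decode to the kubelet's iptables invocations that create the KUBE-FIREWALL, KUBE-MARK-DROP, KUBE-MARK-MASQ and KUBE-POSTROUTING chains.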
Feb 12 19:41:14.560000 audit[1318]: USER_END pid=1318 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.565810 systemd[1]: sshd@5-143.198.151.132:22-139.178.68.195:37150.service: Deactivated successfully. Feb 12 19:41:14.574481 kernel: audit: type=1104 audit(1707766874.544:146): pid=1324 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.574647 kernel: audit: type=1106 audit(1707766874.560:147): pid=1318 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.575220 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:41:14.575877 systemd-logind[1174]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:41:14.578328 systemd-logind[1174]: Removed session 6. Feb 12 19:41:14.560000 audit[1318]: CRED_DISP pid=1318 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.590478 kernel: audit: type=1104 audit(1707766874.560:148): pid=1318 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.151.132:22-139.178.68.195:37152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-143.198.151.132:22-139.178.68.195:37150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.598517 kernel: audit: type=1130 audit(1707766874.563:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.151.132:22-139.178.68.195:37152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:14.637000 audit[1351]: USER_ACCT pid=1351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.638069 sshd[1351]: Accepted publickey for core from 139.178.68.195 port 37152 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:14.639000 audit[1351]: CRED_ACQ pid=1351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.639000 audit[1351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef72a3990 a2=3 a3=0 items=0 ppid=1 pid=1351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:14.639000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:41:14.641116 sshd[1351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:14.651838 systemd[1]: Started session-7.scope. Feb 12 19:41:14.652407 systemd-logind[1174]: New session 7 of user core. Feb 12 19:41:14.662000 audit[1351]: USER_START pid=1351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.664000 audit[1356]: CRED_ACQ pid=1356 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:14.720000 audit[1357]: USER_ACCT pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.721397 sudo[1357]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:41:14.721000 audit[1357]: CRED_REFR pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:14.722823 sudo[1357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:41:14.726000 audit[1357]: USER_START pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:15.349897 systemd[1]: Reloading. 
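Session 7 runs /home/core/install.sh via sudo, and PID 1 then logs "Reloading." — the daemon reload that picks up whatever unit files the script installed. A minimal sketch of triggering the same reload over systemd's D-Bus API, assuming the github.com/coreos/go-systemd/v22 module and root privileges, is:

```go
package main

import (
	"context"
	"log"

	sdbus "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	// Connects to systemd's D-Bus API (system bus or private socket).
	conn, err := sdbus.NewWithContext(ctx)
	if err != nil {
		log.Fatalf("connect to systemd: %v", err)
	}
	defer conn.Close()

	// Equivalent of `systemctl daemon-reload`.
	if err := conn.ReloadContext(ctx); err != nil {
		log.Fatalf("daemon-reload: %v", err)
	}
	log.Println("systemd configuration reloaded")
}
```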
Feb 12 19:41:15.459242 /usr/lib/systemd/system-generators/torcx-generator[1386]: time="2024-02-12T19:41:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:41:15.462636 /usr/lib/systemd/system-generators/torcx-generator[1386]: time="2024-02-12T19:41:15Z" level=info msg="torcx already run" Feb 12 19:41:15.617790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:41:15.618360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:41:15.661764 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:41:15.834655 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:41:15.844873 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:41:15.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:15.847775 systemd[1]: Reached target network-online.target. Feb 12 19:41:15.851794 systemd[1]: Started kubelet.service. Feb 12 19:41:15.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:15.873639 systemd[1]: Starting coreos-metadata.service... Feb 12 19:41:15.948111 coreos-metadata[1448]: Feb 12 19:41:15.947 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:41:15.968538 coreos-metadata[1448]: Feb 12 19:41:15.968 INFO Fetch successful Feb 12 19:41:15.977009 kubelet[1440]: E0212 19:41:15.976929 1440 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:41:15.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:41:15.982356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:41:15.982602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:41:15.993841 systemd[1]: Finished coreos-metadata.service. Feb 12 19:41:15.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:16.598987 systemd[1]: Stopped kubelet.service. Feb 12 19:41:16.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:16.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:16.626086 systemd[1]: Reloading. Feb 12 19:41:16.742169 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2024-02-12T19:41:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:41:16.742774 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2024-02-12T19:41:16Z" level=info msg="torcx already run" Feb 12 19:41:16.886391 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:41:16.886447 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:41:16.918279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:41:17.057934 systemd[1]: Started kubelet.service. Feb 12 19:41:17.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:17.142295 kubelet[1560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:41:17.142951 kubelet[1560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:41:17.143189 kubelet[1560]: I0212 19:41:17.143147 1560 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:41:17.145264 kubelet[1560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:41:17.145417 kubelet[1560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:41:17.504402 kubelet[1560]: I0212 19:41:17.504251 1560 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:41:17.504402 kubelet[1560]: I0212 19:41:17.504299 1560 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:41:17.505340 kubelet[1560]: I0212 19:41:17.505293 1560 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:41:17.508968 kubelet[1560]: I0212 19:41:17.508931 1560 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:41:17.512981 kubelet[1560]: I0212 19:41:17.512939 1560 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:41:17.513821 kubelet[1560]: I0212 19:41:17.513705 1560 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:41:17.513964 kubelet[1560]: I0212 19:41:17.513891 1560 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:41:17.514136 kubelet[1560]: I0212 19:41:17.513968 1560 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:41:17.514136 kubelet[1560]: I0212 19:41:17.513990 1560 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:41:17.514278 kubelet[1560]: I0212 19:41:17.514230 1560 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:41:17.519622 kubelet[1560]: I0212 19:41:17.519576 1560 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:41:17.519622 kubelet[1560]: I0212 19:41:17.519619 1560 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:41:17.519863 kubelet[1560]: I0212 19:41:17.519686 1560 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:41:17.519863 kubelet[1560]: I0212 19:41:17.519723 1560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:41:17.520459 kubelet[1560]: E0212 19:41:17.520401 1560 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:17.520522 kubelet[1560]: E0212 19:41:17.520492 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:17.521835 kubelet[1560]: I0212 19:41:17.521801 1560 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:41:17.522375 kubelet[1560]: W0212 19:41:17.522335 1560 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
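The kubelet (v1.26.5, driving containerd 1.6.16) warns above that --pod-infra-container-image and --volume-plugin-dir are deprecated in favour of the file passed via --config. As an illustrative sketch only, assuming the k8s.io/kubelet and sigs.k8s.io/yaml modules, the following emits a KubeletConfiguration carrying the values visible in the NodeConfig dump: the cgroupfs driver, the default hard-eviction thresholds, and the Flexvolume directory the kubelet recreates.

```go
package main

import (
	"fmt"
	"log"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"

	// Mirrors the NodeConfig dump in the log: cgroupfs driver and the
	// default hard-eviction thresholds.
	cfg.CgroupDriver = "cgroupfs"
	cfg.EvictionHard = map[string]string{
		"memory.available":  "100Mi",
		"nodefs.available":  "10%",
		"nodefs.inodesFree": "5%",
		"imagefs.available": "15%",
	}
	// Path the kubelet logged when recreating the Flexvolume plugin directory.
	cfg.VolumePluginDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```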
Feb 12 19:41:17.523656 kubelet[1560]: I0212 19:41:17.523619 1560 server.go:1186] "Started kubelet" Feb 12 19:41:17.525699 kubelet[1560]: I0212 19:41:17.525602 1560 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:41:17.524000 audit[1560]: AVC avc: denied { mac_admin } for pid=1560 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:41:17.524000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:41:17.524000 audit[1560]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c201b0 a1=c000c380a8 a2=c000c20180 a3=25 items=0 ppid=1 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.524000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:41:17.524000 audit[1560]: AVC avc: denied { mac_admin } for pid=1560 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:41:17.524000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:41:17.524000 audit[1560]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000868e00 a1=c000c380c0 a2=c000c20240 a3=25 items=0 ppid=1 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.524000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:41:17.526766 kubelet[1560]: I0212 19:41:17.526175 1560 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:41:17.526766 kubelet[1560]: I0212 19:41:17.526227 1560 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:41:17.526766 kubelet[1560]: I0212 19:41:17.526311 1560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:41:17.527917 kubelet[1560]: I0212 19:41:17.527887 1560 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:41:17.535053 kubelet[1560]: E0212 19:41:17.535011 1560 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:41:17.535296 kubelet[1560]: E0212 19:41:17.535269 1560 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:41:17.543455 kubelet[1560]: E0212 19:41:17.543259 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed4df871e3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 523571171, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 523571171, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.544178 kubelet[1560]: W0212 19:41:17.544124 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:17.544375 kubelet[1560]: E0212 19:41:17.544357 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:17.544790 kubelet[1560]: W0212 19:41:17.544765 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:17.544916 kubelet[1560]: E0212 19:41:17.544899 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:17.547717 kubelet[1560]: I0212 19:41:17.547672 1560 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:41:17.548309 kubelet[1560]: E0212 19:41:17.548173 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed4eaa8631", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", 
UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 535241777, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 535241777, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.557489 kubelet[1560]: I0212 19:41:17.548340 1560 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:41:17.558278 kubelet[1560]: W0212 19:41:17.558225 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:17.558770 kubelet[1560]: E0212 19:41:17.558729 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:17.558933 kubelet[1560]: E0212 19:41:17.558600 1560 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:17.582000 audit[1572]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.582000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1a905350 a2=0 a3=7ffc1a90533c items=0 ppid=1560 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:41:17.584000 audit[1577]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.584000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd5cf784e0 a2=0 a3=7ffd5cf784cc items=0 ppid=1560 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:41:17.588000 audit[1579]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.588000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7e852c70 a2=0 a3=7ffc7e852c5c items=0 ppid=1560 pid=1579 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:41:17.617000 audit[1584]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.617000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9742a5c0 a2=0 a3=7ffc9742a5ac items=0 ppid=1560 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:41:17.629896 kubelet[1560]: I0212 19:41:17.629865 1560 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:41:17.630167 kubelet[1560]: I0212 19:41:17.630140 1560 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:41:17.630546 kubelet[1560]: I0212 19:41:17.630523 1560 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:41:17.631171 kubelet[1560]: E0212 19:41:17.631067 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:17.633538 kubelet[1560]: E0212 19:41:17.633361 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.635037 kubelet[1560]: E0212 19:41:17.634899 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:17.635838 kubelet[1560]: I0212 19:41:17.635805 1560 policy_none.go:49] "None policy: Start" Feb 12 19:41:17.637253 kubelet[1560]: I0212 19:41:17.637060 1560 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:41:17.637421 kubelet[1560]: I0212 19:41:17.637269 1560 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:41:17.645000 audit[1560]: AVC avc: denied { mac_admin } for pid=1560 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:41:17.645000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:41:17.645000 audit[1560]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001012db0 a1=c0010160d8 a2=c001012d80 a3=25 items=0 ppid=1 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.645000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:41:17.647475 kubelet[1560]: I0212 19:41:17.646970 1560 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:41:17.647475 kubelet[1560]: I0212 19:41:17.647067 1560 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:41:17.647475 kubelet[1560]: I0212 19:41:17.647342 1560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:41:17.653492 kubelet[1560]: E0212 19:41:17.653457 1560 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.198.151.132\" not found" Feb 12 19:41:17.653878 kubelet[1560]: E0212 19:41:17.653767 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed556f8e22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 648817698, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 648817698, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" 
in the namespace "default"' (will not retry!) Feb 12 19:41:17.654248 kubelet[1560]: I0212 19:41:17.654228 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:17.656572 kubelet[1560]: E0212 19:41:17.656522 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:17.657064 kubelet[1560]: E0212 19:41:17.656957 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 654163203, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.658713 kubelet[1560]: E0212 19:41:17.658613 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 654171435, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
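The repeated 'User "system:anonymous" cannot ...' rejections above (events, nodes, services, csidrivers, leases, and the node registration itself) are the API server refusing an unauthenticated client, which suggests the kubelet, with client rotation on, has not yet completed its certificate bootstrap and its requests are going out anonymously. A small diagnostic sketch using client-go's SelfSubjectAccessReview, assuming the client-go module and the /etc/kubernetes/kubelet.conf kubeconfig referenced in the kubelet's command line, would report whether a given credential may list nodes:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as seen in the kubelet's --kubeconfig flag (illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the API server whether the current identity may list nodes,
	// the operation the log shows being denied for system:anonymous.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes",
			},
		},
	}
	result, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", result.Status.Allowed, result.Status.Reason)
}
```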
Feb 12 19:41:17.660904 kubelet[1560]: E0212 19:41:17.660799 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 654176289, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.685000 audit[1589]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.685000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff4c710cd0 a2=0 a3=7fff4c710cbc items=0 ppid=1560 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 19:41:17.687000 audit[1590]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.687000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe6bf56e90 a2=0 a3=7ffe6bf56e7c items=0 ppid=1560 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:41:17.698000 audit[1593]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.698000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeb90e2610 a2=0 a3=7ffeb90e25fc items=0 ppid=1560 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.698000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:41:17.706000 audit[1596]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.706000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffe78dc880 a2=0 a3=7fffe78dc86c items=0 ppid=1560 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:41:17.708000 audit[1597]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.708000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd126b5d00 a2=0 a3=7ffd126b5cec items=0 ppid=1560 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:41:17.710000 audit[1598]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.710000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd631d420 a2=0 a3=7ffcd631d40c items=0 ppid=1560 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:41:17.715000 audit[1600]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.715000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffca780690 a2=0 a3=7fffca78067c items=0 ppid=1560 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:41:17.719000 audit[1602]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.719000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffaf193330 a2=0 a3=7fffaf19331c items=0 ppid=1560 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 12 19:41:17.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:41:17.748000 audit[1605]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.748000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe356ce690 a2=0 a3=7ffe356ce67c items=0 ppid=1560 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:41:17.753000 audit[1607]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.753000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc0bfec5f0 a2=0 a3=7ffc0bfec5dc items=0 ppid=1560 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:41:17.760862 kubelet[1560]: E0212 19:41:17.760827 1560 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:17.769000 audit[1610]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.769000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffcd8235fa0 a2=0 a3=7ffcd8235f8c items=0 ppid=1560 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:41:17.771519 kubelet[1560]: I0212 19:41:17.771479 1560 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:41:17.771000 audit[1611]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.771000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1cf002d0 a2=0 a3=7ffe1cf002bc items=0 ppid=1560 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.771000 audit[1612]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=1612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.771000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9eb077a0 a2=0 a3=7ffe9eb0778c items=0 ppid=1560 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:41:17.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:41:17.774000 audit[1613]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.774000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe5f425c40 a2=0 a3=7ffe5f425c2c items=0 ppid=1560 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:41:17.774000 audit[1614]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.774000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9116a170 a2=0 a3=7fff9116a15c items=0 ppid=1560 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:41:17.776000 audit[1616]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:17.776000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff285a0ea0 a2=0 a3=7fff285a0e8c items=0 ppid=1560 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:41:17.778000 audit[1617]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1617 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.778000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd057c2c00 a2=0 a3=7ffd057c2bec items=0 ppid=1560 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:41:17.779000 audit[1618]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.779000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe8bbda310 a2=0 a3=7ffe8bbda2fc items=0 ppid=1560 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:41:17.783000 audit[1620]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.783000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffffcf1dff0 a2=0 a3=7ffffcf1dfdc items=0 ppid=1560 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:41:17.785000 audit[1621]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.785000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2803ae90 a2=0 a3=7ffc2803ae7c items=0 ppid=1560 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:41:17.787000 audit[1622]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.787000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2b7e0ec0 a2=0 a3=7ffd2b7e0eac items=0 ppid=1560 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:41:17.791000 audit[1624]: NETFILTER_CFG table=nat:27 family=10 entries=1 
op=nft_register_rule pid=1624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.791000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc9a812f40 a2=0 a3=7ffc9a812f2c items=0 ppid=1560 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:41:17.795000 audit[1626]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.795000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc7d9c7020 a2=0 a3=7ffc7d9c700c items=0 ppid=1560 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:41:17.798000 audit[1628]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.798000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc82d4e6b0 a2=0 a3=7ffc82d4e69c items=0 ppid=1560 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:41:17.804000 audit[1630]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.804000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc70c98d50 a2=0 a3=7ffc70c98d3c items=0 ppid=1560 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:41:17.809000 audit[1632]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.809000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff4698ace0 a2=0 a3=7fff4698accc items=0 ppid=1560 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.809000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:41:17.811107 kubelet[1560]: I0212 19:41:17.811082 1560 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:41:17.811386 kubelet[1560]: I0212 19:41:17.811363 1560 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:41:17.811554 kubelet[1560]: I0212 19:41:17.811539 1560 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:41:17.811710 kubelet[1560]: E0212 19:41:17.811693 1560 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:41:17.813000 audit[1633]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.813000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0b492c00 a2=0 a3=7ffe0b492bec items=0 ppid=1560 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:41:17.814127 kubelet[1560]: W0212 19:41:17.814096 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:17.814238 kubelet[1560]: E0212 19:41:17.814140 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:17.815000 audit[1634]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.815000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeefa04d60 a2=0 a3=7ffeefa04d4c items=0 ppid=1560 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:41:17.817000 audit[1635]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:17.817000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea7cdcac0 a2=0 a3=7ffea7cdcaac items=0 ppid=1560 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:17.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 
19:41:17.858196 kubelet[1560]: I0212 19:41:17.858157 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:17.860451 kubelet[1560]: E0212 19:41:17.860398 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:17.860860 kubelet[1560]: E0212 19:41:17.860744 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 858103158, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:17.862935 kubelet[1560]: E0212 19:41:17.862819 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 858111157, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
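The audit PROCTITLE records in this stretch of the log carry the invoked command line as a single hex string in which NUL bytes separate the arguments. A minimal decoding sketch (Python; the decode_proctitle helper name is only illustrative), applied to the first such record above (the pid 1589 iptables call):

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hexstr: str) -> str:
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# Hex value copied verbatim from the PROCTITLE record that follows audit[1589] above.
sample = "69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38"
print(decode_proctitle(sample))
# prints: iptables -w 5 -W 100000 -A KUBE-FIREWALL -t filter -m comment
#         --comment block incoming localnet connections --dst 127.0.0.0/8
```

The other PROCTITLE entries decode the same way and spell out the rest of the KUBE-MARK-DROP, KUBE-MARK-MASQ, KUBE-POSTROUTING and KUBE-KUBELET-CANARY chain setup that the kubelet performs for both iptables and ip6tables.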
Feb 12 19:41:17.927070 kubelet[1560]: E0212 19:41:17.926935 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 858116634, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:18.164022 kubelet[1560]: E0212 19:41:18.163880 1560 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:18.262359 kubelet[1560]: I0212 19:41:18.262322 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:18.264048 kubelet[1560]: E0212 19:41:18.264004 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:18.264394 kubelet[1560]: E0212 19:41:18.264241 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 18, 261903036, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:18.326107 kubelet[1560]: E0212 19:41:18.325962 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 18, 261923577, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:18.521611 kubelet[1560]: E0212 19:41:18.521449 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:18.527201 kubelet[1560]: E0212 19:41:18.527080 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 18, 261928273, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
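The repeated "Server rejected event" errors above all reference the same three node-status events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID): FirstTimestamp stays fixed while LastTimestamp and Count advance (2, 3, 4, ...), because the event recorder folds identical events into one object and then tries to patch its count rather than creating a new event, which is also why the RBAC denial is for the patch verb. A rough sketch of that aggregation pattern, not the actual client-go implementation:

```python
# Illustrative only: repeated events keyed by (object, reason, message) keep one
# record whose Count and LastTimestamp advance while FirstTimestamp is preserved.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AggregatedEvent:
    reason: str
    message: str
    first_timestamp: datetime
    last_timestamp: datetime
    count: int = 1

_events: dict[tuple, AggregatedEvent] = {}

def record_event(obj: str, reason: str, message: str, now: datetime) -> AggregatedEvent:
    key = (obj, reason, message)
    ev = _events.get(key)
    if ev is None:
        ev = _events[key] = AggregatedEvent(reason, message, now, now)
    else:
        ev.count += 1            # Count:2, 3, 4, ... as seen in the log
        ev.last_timestamp = now  # FirstTimestamp is left untouched
    return ev
```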
Feb 12 19:41:18.565919 kubelet[1560]: W0212 19:41:18.565860 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:18.566180 kubelet[1560]: E0212 19:41:18.566156 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:18.631442 kubelet[1560]: W0212 19:41:18.631357 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:18.631442 kubelet[1560]: E0212 19:41:18.631442 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:18.836942 kubelet[1560]: W0212 19:41:18.836765 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:18.836942 kubelet[1560]: E0212 19:41:18.836814 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:18.967915 kubelet[1560]: E0212 19:41:18.967816 1560 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:19.065846 kubelet[1560]: I0212 19:41:19.065808 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:19.067920 kubelet[1560]: E0212 19:41:19.067790 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 19, 65743809, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:19.068467 kubelet[1560]: E0212 19:41:19.068350 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:19.070327 kubelet[1560]: E0212 19:41:19.070212 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 19, 65770501, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:19.089180 kubelet[1560]: W0212 19:41:19.089043 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:19.089458 kubelet[1560]: E0212 19:41:19.089407 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:19.126269 kubelet[1560]: E0212 19:41:19.126143 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 19, 65774462, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:19.521875 kubelet[1560]: E0212 19:41:19.521688 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:20.281774 kubelet[1560]: W0212 19:41:20.281714 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:20.281774 kubelet[1560]: E0212 19:41:20.281775 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:20.522877 kubelet[1560]: E0212 19:41:20.522821 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:20.570622 kubelet[1560]: E0212 19:41:20.570445 1560 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:20.670249 kubelet[1560]: I0212 19:41:20.670217 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:20.672365 kubelet[1560]: E0212 19:41:20.672326 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:20.672705 kubelet[1560]: E0212 19:41:20.672296 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 20, 670160841, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:20.674862 kubelet[1560]: E0212 19:41:20.674765 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 20, 670179236, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:20.676739 kubelet[1560]: E0212 19:41:20.676637 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 20, 670184887, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:21.406803 kubelet[1560]: W0212 19:41:21.406754 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:21.407087 kubelet[1560]: E0212 19:41:21.407064 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:21.524489 kubelet[1560]: E0212 19:41:21.524408 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:21.701245 kubelet[1560]: W0212 19:41:21.701130 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:21.701512 kubelet[1560]: E0212 19:41:21.701490 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:21.712204 kubelet[1560]: W0212 19:41:21.712155 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:21.712421 kubelet[1560]: E0212 19:41:21.712404 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:22.525963 kubelet[1560]: E0212 19:41:22.525867 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:23.526719 kubelet[1560]: E0212 19:41:23.526632 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:23.773139 kubelet[1560]: E0212 19:41:23.773057 1560 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "143.198.151.132" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:41:23.874291 kubelet[1560]: I0212 19:41:23.874253 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:23.876003 kubelet[1560]: E0212 19:41:23.875971 1560 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.198.151.132" Feb 12 19:41:23.876323 kubelet[1560]: E0212 19:41:23.876182 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b2028", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 143.198.151.132 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628604456, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 23, 874202877, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b2028" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:23.877873 kubelet[1560]: E0212 19:41:23.877768 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b3dbc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 143.198.151.132 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628612028, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 23, 874210609, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b3dbc" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
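The controller.go failures above back off by doubling the retry interval: 400 ms, 800 ms, 1.6 s, 3.2 s and then 6.4 s. A minimal sketch of that kind of exponential backoff loop (the attempt_lease callable is a placeholder, not the kubelet's actual lease controller):

```python
import time

def ensure_lease_with_backoff(attempt_lease, initial: float = 0.4) -> None:
    """Retry attempt_lease with doubling delays (0.4 s, 0.8 s, 1.6 s, ...),
    mirroring the intervals reported in the controller.go messages above."""
    delay = initial
    while not attempt_lease():
        print(f"failed to ensure lease exists, will retry in {delay:g}s")
        time.sleep(delay)
        delay *= 2  # 0.4 -> 0.8 -> 1.6 -> 3.2 -> 6.4 ...
```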
Feb 12 19:41:23.879751 kubelet[1560]: E0212 19:41:23.879669 1560 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132.17b334ed543b54cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"143.198.151.132", UID:"143.198.151.132", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 143.198.151.132 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"143.198.151.132"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 17, 628617935, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 23, 874218536, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "143.198.151.132.17b334ed543b54cf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:41:24.382146 kubelet[1560]: W0212 19:41:24.382096 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:24.382411 kubelet[1560]: E0212 19:41:24.382386 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:41:24.527598 kubelet[1560]: E0212 19:41:24.527538 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:25.527965 kubelet[1560]: E0212 19:41:25.527883 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:25.728161 kubelet[1560]: W0212 19:41:25.728090 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:25.728161 kubelet[1560]: E0212 19:41:25.728148 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.198.151.132" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:41:26.525969 kubelet[1560]: W0212 19:41:26.525916 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:26.525969 kubelet[1560]: E0212 19:41:26.525965 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:41:26.529092 kubelet[1560]: E0212 19:41:26.529051 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:27.069200 kubelet[1560]: W0212 19:41:27.069161 1560 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:27.069200 kubelet[1560]: E0212 19:41:27.069201 1560 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:41:27.507753 kubelet[1560]: I0212 19:41:27.507606 1560 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:41:27.530674 kubelet[1560]: E0212 19:41:27.530614 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:27.653795 kubelet[1560]: E0212 19:41:27.653728 1560 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.198.151.132\" not found" Feb 12 19:41:27.906306 kubelet[1560]: E0212 19:41:27.906251 1560 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "143.198.151.132" not found Feb 12 19:41:28.531354 kubelet[1560]: E0212 19:41:28.531310 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:28.949739 kubelet[1560]: E0212 19:41:28.949678 1560 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "143.198.151.132" not found Feb 12 19:41:29.533052 kubelet[1560]: E0212 19:41:29.532993 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:30.188562 kubelet[1560]: E0212 19:41:30.188504 1560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"143.198.151.132\" not found" node="143.198.151.132" Feb 12 19:41:30.278071 kubelet[1560]: I0212 19:41:30.278041 1560 kubelet_node_status.go:70] "Attempting to register node" node="143.198.151.132" Feb 12 19:41:30.351767 kubelet[1560]: I0212 19:41:30.351724 1560 kubelet_node_status.go:73] "Successfully registered node" node="143.198.151.132" Feb 12 19:41:30.367574 kubelet[1560]: E0212 19:41:30.367527 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.468720 kubelet[1560]: E0212 19:41:30.468576 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.533562 kubelet[1560]: E0212 19:41:30.533520 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:30.569780 kubelet[1560]: E0212 19:41:30.569718 1560 kubelet_node_status.go:458] "Error getting 
the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.580168 sudo[1357]: pam_unix(sudo:session): session closed for user root Feb 12 19:41:30.578000 audit[1357]: USER_END pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:30.581930 kernel: kauditd_printk_skb: 129 callbacks suppressed Feb 12 19:41:30.581970 kernel: audit: type=1106 audit(1707766890.578:202): pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:30.578000 audit[1357]: CRED_DISP pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:30.593249 kernel: audit: type=1104 audit(1707766890.578:203): pid=1357 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:41:30.593533 sshd[1351]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:30.593000 audit[1351]: USER_END pid=1351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:30.603461 kernel: audit: type=1106 audit(1707766890.593:204): pid=1351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:30.597269 systemd[1]: sshd@6-143.198.151.132:22-139.178.68.195:37152.service: Deactivated successfully. Feb 12 19:41:30.598509 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:41:30.603988 systemd-logind[1174]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:41:30.593000 audit[1351]: CRED_DISP pid=1351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:30.610939 systemd-logind[1174]: Removed session 7. Feb 12 19:41:30.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.151.132:22-139.178.68.195:37152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:30.617407 kernel: audit: type=1104 audit(1707766890.593:205): pid=1351 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 12 19:41:30.617567 kernel: audit: type=1131 audit(1707766890.593:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-143.198.151.132:22-139.178.68.195:37152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:30.670662 kubelet[1560]: E0212 19:41:30.670607 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.772781 kubelet[1560]: E0212 19:41:30.771475 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.872110 kubelet[1560]: E0212 19:41:30.872053 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:30.973362 kubelet[1560]: E0212 19:41:30.973307 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.074575 kubelet[1560]: E0212 19:41:31.074514 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.175587 kubelet[1560]: E0212 19:41:31.175529 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.276765 kubelet[1560]: E0212 19:41:31.276704 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.378254 kubelet[1560]: E0212 19:41:31.377760 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.478854 kubelet[1560]: E0212 19:41:31.478766 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.534820 kubelet[1560]: E0212 19:41:31.534770 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:31.578962 kubelet[1560]: E0212 19:41:31.578903 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.680201 kubelet[1560]: E0212 19:41:31.679532 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.780560 kubelet[1560]: E0212 19:41:31.780504 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.881674 kubelet[1560]: E0212 19:41:31.881625 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:31.983375 kubelet[1560]: E0212 19:41:31.982655 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.082901 kubelet[1560]: E0212 19:41:32.082846 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.183969 kubelet[1560]: E0212 19:41:32.183913 1560 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.285613 kubelet[1560]: E0212 19:41:32.285011 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.386053 kubelet[1560]: E0212 19:41:32.385998 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.487053 kubelet[1560]: E0212 19:41:32.486963 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.536617 kubelet[1560]: E0212 19:41:32.535859 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:32.588202 kubelet[1560]: E0212 19:41:32.588104 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.688324 kubelet[1560]: E0212 19:41:32.688254 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.789194 kubelet[1560]: E0212 19:41:32.789163 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.890743 kubelet[1560]: E0212 19:41:32.890674 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:32.991816 kubelet[1560]: E0212 19:41:32.991760 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.093708 kubelet[1560]: E0212 19:41:33.093158 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.194210 kubelet[1560]: E0212 19:41:33.194145 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.295499 kubelet[1560]: E0212 19:41:33.295424 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.396965 kubelet[1560]: E0212 19:41:33.396401 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.497506 kubelet[1560]: E0212 19:41:33.497411 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.536453 kubelet[1560]: E0212 19:41:33.536359 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:33.598792 kubelet[1560]: E0212 19:41:33.598742 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.700865 kubelet[1560]: E0212 19:41:33.700203 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.801113 kubelet[1560]: E0212 19:41:33.801015 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:33.901462 kubelet[1560]: E0212 19:41:33.901383 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.002952 
kubelet[1560]: E0212 19:41:34.002341 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.103342 kubelet[1560]: E0212 19:41:34.103239 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.203947 kubelet[1560]: E0212 19:41:34.203880 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.305057 kubelet[1560]: E0212 19:41:34.304992 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.406328 kubelet[1560]: E0212 19:41:34.406267 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.507311 kubelet[1560]: E0212 19:41:34.507259 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.537257 kubelet[1560]: E0212 19:41:34.537188 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:34.609036 kubelet[1560]: E0212 19:41:34.608410 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.708949 kubelet[1560]: E0212 19:41:34.708888 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.810000 kubelet[1560]: E0212 19:41:34.809941 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:34.911837 kubelet[1560]: E0212 19:41:34.911308 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.012766 kubelet[1560]: E0212 19:41:35.012691 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.113873 kubelet[1560]: E0212 19:41:35.113817 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.215343 kubelet[1560]: E0212 19:41:35.214874 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.315879 kubelet[1560]: E0212 19:41:35.315789 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.416880 kubelet[1560]: E0212 19:41:35.416796 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.517743 kubelet[1560]: E0212 19:41:35.517125 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.537825 kubelet[1560]: E0212 19:41:35.537736 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:35.617867 kubelet[1560]: E0212 19:41:35.617783 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.718313 kubelet[1560]: E0212 19:41:35.718196 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not 
found" Feb 12 19:41:35.819295 kubelet[1560]: E0212 19:41:35.819206 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:35.919561 kubelet[1560]: E0212 19:41:35.919512 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.020595 kubelet[1560]: E0212 19:41:36.020532 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.121940 kubelet[1560]: E0212 19:41:36.121418 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.222420 kubelet[1560]: E0212 19:41:36.222365 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.323382 kubelet[1560]: E0212 19:41:36.323332 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.425099 kubelet[1560]: E0212 19:41:36.424322 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.524632 kubelet[1560]: E0212 19:41:36.524563 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.538239 kubelet[1560]: E0212 19:41:36.538192 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:36.625086 kubelet[1560]: E0212 19:41:36.625013 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.725921 kubelet[1560]: E0212 19:41:36.725209 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.825597 kubelet[1560]: E0212 19:41:36.825537 1560 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"143.198.151.132\" not found" Feb 12 19:41:36.926810 kubelet[1560]: I0212 19:41:36.926774 1560 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:41:36.927648 env[1188]: time="2024-02-12T19:41:36.927553154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 19:41:36.928369 kubelet[1560]: I0212 19:41:36.928347 1560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:41:37.520887 kubelet[1560]: E0212 19:41:37.520822 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:37.532163 kubelet[1560]: I0212 19:41:37.532099 1560 apiserver.go:52] "Watching apiserver" Feb 12 19:41:37.538041 kubelet[1560]: I0212 19:41:37.538002 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:41:37.538258 kubelet[1560]: I0212 19:41:37.538136 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:41:37.538258 kubelet[1560]: I0212 19:41:37.538179 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:41:37.538662 kubelet[1560]: E0212 19:41:37.538636 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:37.539843 kubelet[1560]: E0212 19:41:37.539826 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:37.562255 kubelet[1560]: I0212 19:41:37.562196 1560 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:41:37.583264 kubelet[1560]: I0212 19:41:37.583212 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92522e5a-f925-4cdb-bec2-021a0a726c52-xtables-lock\") pod \"kube-proxy-s2sj7\" (UID: \"92522e5a-f925-4cdb-bec2-021a0a726c52\") " pod="kube-system/kube-proxy-s2sj7" Feb 12 19:41:37.583606 kubelet[1560]: I0212 19:41:37.583582 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92522e5a-f925-4cdb-bec2-021a0a726c52-lib-modules\") pod \"kube-proxy-s2sj7\" (UID: \"92522e5a-f925-4cdb-bec2-021a0a726c52\") " pod="kube-system/kube-proxy-s2sj7" Feb 12 19:41:37.583777 kubelet[1560]: I0212 19:41:37.583758 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfbv\" (UniqueName: \"kubernetes.io/projected/92522e5a-f925-4cdb-bec2-021a0a726c52-kube-api-access-hlfbv\") pod \"kube-proxy-s2sj7\" (UID: \"92522e5a-f925-4cdb-bec2-021a0a726c52\") " pod="kube-system/kube-proxy-s2sj7" Feb 12 19:41:37.583952 kubelet[1560]: I0212 19:41:37.583932 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-tigera-ca-bundle\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.584186 kubelet[1560]: I0212 19:41:37.584167 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-cni-log-dir\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.584342 kubelet[1560]: I0212 19:41:37.584324 1560 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-flexvol-driver-host\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.584518 kubelet[1560]: I0212 19:41:37.584499 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9vw\" (UniqueName: \"kubernetes.io/projected/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-kube-api-access-8t9vw\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.584683 kubelet[1560]: I0212 19:41:37.584663 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436f3581-5e49-4ebf-b2ed-e5dfb138d87d-kubelet-dir\") pod \"csi-node-driver-x8d4g\" (UID: \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\") " pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:37.584835 kubelet[1560]: I0212 19:41:37.584818 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/436f3581-5e49-4ebf-b2ed-e5dfb138d87d-socket-dir\") pod \"csi-node-driver-x8d4g\" (UID: \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\") " pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:37.585066 kubelet[1560]: I0212 19:41:37.585044 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/436f3581-5e49-4ebf-b2ed-e5dfb138d87d-registration-dir\") pod \"csi-node-driver-x8d4g\" (UID: \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\") " pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:37.585262 kubelet[1560]: I0212 19:41:37.585212 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-lib-modules\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.585420 kubelet[1560]: I0212 19:41:37.585401 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-var-lib-calico\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.585621 kubelet[1560]: I0212 19:41:37.585600 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92522e5a-f925-4cdb-bec2-021a0a726c52-kube-proxy\") pod \"kube-proxy-s2sj7\" (UID: \"92522e5a-f925-4cdb-bec2-021a0a726c52\") " pod="kube-system/kube-proxy-s2sj7" Feb 12 19:41:37.585787 kubelet[1560]: I0212 19:41:37.585767 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-xtables-lock\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.585943 kubelet[1560]: I0212 19:41:37.585923 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-var-run-calico\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.586096 kubelet[1560]: I0212 19:41:37.586078 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-cni-net-dir\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.586269 kubelet[1560]: I0212 19:41:37.586250 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/436f3581-5e49-4ebf-b2ed-e5dfb138d87d-varrun\") pod \"csi-node-driver-x8d4g\" (UID: \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\") " pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:37.586445 kubelet[1560]: I0212 19:41:37.586412 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hmnp\" (UniqueName: \"kubernetes.io/projected/436f3581-5e49-4ebf-b2ed-e5dfb138d87d-kube-api-access-4hmnp\") pod \"csi-node-driver-x8d4g\" (UID: \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\") " pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:37.586615 kubelet[1560]: I0212 19:41:37.586596 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-policysync\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.586795 kubelet[1560]: I0212 19:41:37.586778 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-node-certs\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.586946 kubelet[1560]: I0212 19:41:37.586929 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/62fdcbf7-669c-44e3-a1f0-68b73824b0ea-cni-bin-dir\") pod \"calico-node-6xxqd\" (UID: \"62fdcbf7-669c-44e3-a1f0-68b73824b0ea\") " pod="calico-system/calico-node-6xxqd" Feb 12 19:41:37.587073 kubelet[1560]: I0212 19:41:37.587057 1560 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:41:37.691888 kubelet[1560]: E0212 19:41:37.691836 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.691888 kubelet[1560]: W0212 19:41:37.691870 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.691888 kubelet[1560]: E0212 19:41:37.691904 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.692819 kubelet[1560]: E0212 19:41:37.692777 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.692819 kubelet[1560]: W0212 19:41:37.692809 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.693059 kubelet[1560]: E0212 19:41:37.692835 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.693332 kubelet[1560]: E0212 19:41:37.693306 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.693332 kubelet[1560]: W0212 19:41:37.693328 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.693501 kubelet[1560]: E0212 19:41:37.693350 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.694657 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.706658 kubelet[1560]: W0212 19:41:37.694677 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.694699 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.694949 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.706658 kubelet[1560]: W0212 19:41:37.694961 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.694981 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.695194 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.706658 kubelet[1560]: W0212 19:41:37.695204 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.695220 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.706658 kubelet[1560]: E0212 19:41:37.695373 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707295 kubelet[1560]: W0212 19:41:37.695380 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.695393 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.695649 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707295 kubelet[1560]: W0212 19:41:37.695659 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.695674 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.695852 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707295 kubelet[1560]: W0212 19:41:37.695861 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.695877 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.707295 kubelet[1560]: E0212 19:41:37.696078 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707295 kubelet[1560]: W0212 19:41:37.696091 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696111 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696313 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707793 kubelet[1560]: W0212 19:41:37.696324 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696342 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696618 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707793 kubelet[1560]: W0212 19:41:37.696630 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696647 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696881 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.707793 kubelet[1560]: W0212 19:41:37.696892 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.707793 kubelet[1560]: E0212 19:41:37.696913 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697158 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708229 kubelet[1560]: W0212 19:41:37.697170 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697186 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697392 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708229 kubelet[1560]: W0212 19:41:37.697403 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697418 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697691 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708229 kubelet[1560]: W0212 19:41:37.697704 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.697720 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.708229 kubelet[1560]: E0212 19:41:37.698614 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708779 kubelet[1560]: W0212 19:41:37.698628 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.698645 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.698862 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708779 kubelet[1560]: W0212 19:41:37.698873 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.698890 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.699102 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708779 kubelet[1560]: W0212 19:41:37.699120 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.699140 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.708779 kubelet[1560]: E0212 19:41:37.699343 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.708779 kubelet[1560]: W0212 19:41:37.699354 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.699369 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.699651 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709225 kubelet[1560]: W0212 19:41:37.699663 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.699681 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.699904 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709225 kubelet[1560]: W0212 19:41:37.699916 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.699934 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.700133 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709225 kubelet[1560]: W0212 19:41:37.700145 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709225 kubelet[1560]: E0212 19:41:37.700160 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.700355 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709741 kubelet[1560]: W0212 19:41:37.700366 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.700381 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.700743 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709741 kubelet[1560]: W0212 19:41:37.700756 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.700773 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.701025 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.709741 kubelet[1560]: W0212 19:41:37.701037 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.701054 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.709741 kubelet[1560]: E0212 19:41:37.701279 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710159 kubelet[1560]: W0212 19:41:37.701291 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.701313 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.701554 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710159 kubelet[1560]: W0212 19:41:37.701566 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.701582 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.701826 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710159 kubelet[1560]: W0212 19:41:37.701837 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.701855 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.710159 kubelet[1560]: E0212 19:41:37.702076 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710159 kubelet[1560]: W0212 19:41:37.702094 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702112 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702318 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710637 kubelet[1560]: W0212 19:41:37.702329 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702345 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702594 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710637 kubelet[1560]: W0212 19:41:37.702605 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702622 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702871 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.710637 kubelet[1560]: W0212 19:41:37.702882 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.710637 kubelet[1560]: E0212 19:41:37.702901 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703106 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711043 kubelet[1560]: W0212 19:41:37.703120 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703141 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703352 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711043 kubelet[1560]: W0212 19:41:37.703364 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703383 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703604 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711043 kubelet[1560]: W0212 19:41:37.703615 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703631 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.711043 kubelet[1560]: E0212 19:41:37.703862 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711586 kubelet[1560]: W0212 19:41:37.703874 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.703891 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.704207 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711586 kubelet[1560]: W0212 19:41:37.704220 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.704240 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.704528 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711586 kubelet[1560]: W0212 19:41:37.704541 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.704558 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.711586 kubelet[1560]: E0212 19:41:37.704748 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.711586 kubelet[1560]: W0212 19:41:37.704760 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.712168 kubelet[1560]: E0212 19:41:37.704773 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.712168 kubelet[1560]: E0212 19:41:37.705085 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.712168 kubelet[1560]: W0212 19:41:37.705097 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.712168 kubelet[1560]: E0212 19:41:37.705112 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.712168 kubelet[1560]: E0212 19:41:37.706400 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.712168 kubelet[1560]: W0212 19:41:37.706414 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.712168 kubelet[1560]: E0212 19:41:37.706586 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.790090 kubelet[1560]: E0212 19:41:37.790050 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.790360 kubelet[1560]: W0212 19:41:37.790329 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.790530 kubelet[1560]: E0212 19:41:37.790507 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.790988 kubelet[1560]: E0212 19:41:37.790967 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.791129 kubelet[1560]: W0212 19:41:37.791106 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.791277 kubelet[1560]: E0212 19:41:37.791257 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.791723 kubelet[1560]: E0212 19:41:37.791704 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.791871 kubelet[1560]: W0212 19:41:37.791849 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.791984 kubelet[1560]: E0212 19:41:37.791966 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.893582 kubelet[1560]: E0212 19:41:37.893536 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.893582 kubelet[1560]: W0212 19:41:37.893565 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.893859 kubelet[1560]: E0212 19:41:37.893600 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:37.893922 kubelet[1560]: E0212 19:41:37.893878 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.893922 kubelet[1560]: W0212 19:41:37.893892 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.893922 kubelet[1560]: E0212 19:41:37.893915 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.894158 kubelet[1560]: E0212 19:41:37.894137 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.894158 kubelet[1560]: W0212 19:41:37.894152 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.894301 kubelet[1560]: E0212 19:41:37.894177 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.956834 kubelet[1560]: E0212 19:41:37.956804 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.956834 kubelet[1560]: W0212 19:41:37.956826 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.957141 kubelet[1560]: E0212 19:41:37.956851 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.995726 kubelet[1560]: E0212 19:41:37.995672 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.995726 kubelet[1560]: W0212 19:41:37.995711 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.996212 kubelet[1560]: E0212 19:41:37.995751 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:37.996212 kubelet[1560]: E0212 19:41:37.996074 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:37.996212 kubelet[1560]: W0212 19:41:37.996087 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:37.996212 kubelet[1560]: E0212 19:41:37.996108 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:38.097713 kubelet[1560]: E0212 19:41:38.097575 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.097968 kubelet[1560]: W0212 19:41:38.097931 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.098158 kubelet[1560]: E0212 19:41:38.098138 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:38.098628 kubelet[1560]: E0212 19:41:38.098604 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.098778 kubelet[1560]: W0212 19:41:38.098754 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.098892 kubelet[1560]: E0212 19:41:38.098875 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:38.144773 kubelet[1560]: E0212 19:41:38.144731 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:38.145819 env[1188]: time="2024-02-12T19:41:38.145764176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6xxqd,Uid:62fdcbf7-669c-44e3-a1f0-68b73824b0ea,Namespace:calico-system,Attempt:0,}" Feb 12 19:41:38.157647 kubelet[1560]: E0212 19:41:38.157617 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.157871 kubelet[1560]: W0212 19:41:38.157846 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.157984 kubelet[1560]: E0212 19:41:38.157968 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:38.200555 kubelet[1560]: E0212 19:41:38.200408 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.200555 kubelet[1560]: W0212 19:41:38.200457 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.200555 kubelet[1560]: E0212 19:41:38.200495 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:38.302024 kubelet[1560]: E0212 19:41:38.301964 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.302024 kubelet[1560]: W0212 19:41:38.302002 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.302024 kubelet[1560]: E0212 19:41:38.302032 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:38.354800 kubelet[1560]: E0212 19:41:38.354656 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:38.355049 kubelet[1560]: W0212 19:41:38.355023 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:38.355206 kubelet[1560]: E0212 19:41:38.355186 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:38.444723 kubelet[1560]: E0212 19:41:38.444679 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:38.445972 env[1188]: time="2024-02-12T19:41:38.445890074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2sj7,Uid:92522e5a-f925-4cdb-bec2-021a0a726c52,Namespace:kube-system,Attempt:0,}" Feb 12 19:41:38.540840 kubelet[1560]: E0212 19:41:38.540739 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:38.812512 kubelet[1560]: E0212 19:41:38.812399 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:38.870305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592160950.mount: Deactivated successfully. 
Feb 12 19:41:38.975091 env[1188]: time="2024-02-12T19:41:38.975034187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:38.983290 env[1188]: time="2024-02-12T19:41:38.983231965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:38.988180 env[1188]: time="2024-02-12T19:41:38.988110535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:38.995536 env[1188]: time="2024-02-12T19:41:38.995475832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:38.998778 env[1188]: time="2024-02-12T19:41:38.998727825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:39.000784 env[1188]: time="2024-02-12T19:41:39.000731829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:39.003350 env[1188]: time="2024-02-12T19:41:39.003287464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:39.013241 env[1188]: time="2024-02-12T19:41:39.013145328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:39.074900 env[1188]: time="2024-02-12T19:41:39.074679359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:41:39.075103 env[1188]: time="2024-02-12T19:41:39.074758630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:41:39.075103 env[1188]: time="2024-02-12T19:41:39.074779706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:41:39.076524 env[1188]: time="2024-02-12T19:41:39.076446601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7 pid=1713 runtime=io.containerd.runc.v2 Feb 12 19:41:39.087852 env[1188]: time="2024-02-12T19:41:39.087747664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:41:39.088125 env[1188]: time="2024-02-12T19:41:39.088082114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:41:39.088275 env[1188]: time="2024-02-12T19:41:39.088240097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:41:39.088652 env[1188]: time="2024-02-12T19:41:39.088590134Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83479bad61ab231cf84a1e13d6e16be652ac3d9fc9708914385c10bbdbb89463 pid=1721 runtime=io.containerd.runc.v2 Feb 12 19:41:39.176339 env[1188]: time="2024-02-12T19:41:39.176293527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2sj7,Uid:92522e5a-f925-4cdb-bec2-021a0a726c52,Namespace:kube-system,Attempt:0,} returns sandbox id \"83479bad61ab231cf84a1e13d6e16be652ac3d9fc9708914385c10bbdbb89463\"" Feb 12 19:41:39.178753 kubelet[1560]: E0212 19:41:39.178494 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:39.180142 env[1188]: time="2024-02-12T19:41:39.180094970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:41:39.183740 env[1188]: time="2024-02-12T19:41:39.183693480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6xxqd,Uid:62fdcbf7-669c-44e3-a1f0-68b73824b0ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\"" Feb 12 19:41:39.184911 kubelet[1560]: E0212 19:41:39.184710 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:39.541328 kubelet[1560]: E0212 19:41:39.541277 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:40.458360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738460107.mount: Deactivated successfully. 
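[annotation] The recurring dns.go:156 error ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") is the kubelet's resolv.conf check: the nameserver list it passes along is capped (three entries, matching the classic glibc resolver limit), and the droplet's resolv.conf evidently lists the DigitalOcean resolvers with a duplicate, so anything past the cap is dropped. A rough Go sketch of that kind of check follows; the three-entry cap and the parsing here are assumptions for illustration, not the kubelet's own code.

```go
// resolvcheck.go - rough sketch of a resolv.conf nameserver-count check, in the
// spirit of the kubelet warning above. The limit of 3 is an assumed constant
// (the classic glibc resolver cap); this is not the kubelet implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed cap, matching the three entries the kubelet applied

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read resolv.conf:", err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only the first %d would be applied: %s\n",
			len(nameservers), maxNameservers, strings.Join(nameservers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameservers within limit: %s\n", strings.Join(nameservers, " "))
	}
}
```

The usual remedy is on the host side: deduplicate or trim the DHCP/cloud-provided DNS servers so the kubelet sees at most three nameserver lines.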
Feb 12 19:41:40.541701 kubelet[1560]: E0212 19:41:40.541657 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:40.813090 kubelet[1560]: E0212 19:41:40.813026 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:41.399206 env[1188]: time="2024-02-12T19:41:41.399109166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:41.405549 env[1188]: time="2024-02-12T19:41:41.405467176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:41.413964 env[1188]: time="2024-02-12T19:41:41.413895528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:41.417880 env[1188]: time="2024-02-12T19:41:41.417831637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:41.418791 env[1188]: time="2024-02-12T19:41:41.418746601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:41:41.420798 env[1188]: time="2024-02-12T19:41:41.420757417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 19:41:41.423047 env[1188]: time="2024-02-12T19:41:41.422975773Z" level=info msg="CreateContainer within sandbox \"83479bad61ab231cf84a1e13d6e16be652ac3d9fc9708914385c10bbdbb89463\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:41:41.451258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084617766.mount: Deactivated successfully. Feb 12 19:41:41.463201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22369909.mount: Deactivated successfully. 
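[annotation] Once the kube-proxy container starts, the audit log below fills with NETFILTER_CFG / SYSCALL / PROCTITLE triplets: each one is an iptables or ip6tables invocation (comm="iptables"/"ip6tables", exe=/usr/sbin/xtables-nft-multi, all children of ppid=1801, presumably the kube-proxy process) creating the standard proxy chains in both the IPv4 (family=2) and IPv6 (family=10) tables. The PROCTITLE field is the command's argv, NUL-separated and hex-encoded; decoding it recovers the exact command line. The Go sketch below decodes the first PROCTITLE in the burst; the helper name and the embedded sample constant are just for illustration.

```go
// proctitle_decode.go - decode an audit PROCTITLE field (hex-encoded,
// NUL-separated argv) back into a readable command line. The sample value is
// the first PROCTITLE record in the audit entries that follow.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", fmt.Errorf("invalid proctitle hex: %w", err)
	}
	// argv entries are separated by NUL bytes; join them with spaces for display.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	const sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
}
```

The remaining records decode the same way: chain creation for KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD and KUBE-PROXY-FIREWALL, followed by an iptables-restore --noflush --counters that loads the rule set.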
Feb 12 19:41:41.469988 env[1188]: time="2024-02-12T19:41:41.469921120Z" level=info msg="CreateContainer within sandbox \"83479bad61ab231cf84a1e13d6e16be652ac3d9fc9708914385c10bbdbb89463\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eed6fdbdec79f04cf5fd01f1c07c8b19f3e35bcdd60404b3f9c44da29746145c\"" Feb 12 19:41:41.471302 env[1188]: time="2024-02-12T19:41:41.471263957Z" level=info msg="StartContainer for \"eed6fdbdec79f04cf5fd01f1c07c8b19f3e35bcdd60404b3f9c44da29746145c\"" Feb 12 19:41:41.542937 kubelet[1560]: E0212 19:41:41.542799 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:41.597854 env[1188]: time="2024-02-12T19:41:41.597773665Z" level=info msg="StartContainer for \"eed6fdbdec79f04cf5fd01f1c07c8b19f3e35bcdd60404b3f9c44da29746145c\" returns successfully" Feb 12 19:41:41.670000 audit[1841]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.675593 kernel: audit: type=1325 audit(1707766901.670:207): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.670000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29007260 a2=0 a3=7ffd2900724c items=0 ppid=1801 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.685485 kernel: audit: type=1300 audit(1707766901.670:207): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29007260 a2=0 a3=7ffd2900724c items=0 ppid=1801 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.685652 kernel: audit: type=1327 audit(1707766901.670:207): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:41:41.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:41:41.677000 audit[1842]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.677000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc4456aa0 a2=0 a3=7ffdc4456a8c items=0 ppid=1801 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.705867 kernel: audit: type=1325 audit(1707766901.677:208): table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.706072 kernel: audit: type=1300 audit(1707766901.677:208): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc4456aa0 a2=0 a3=7ffdc4456a8c items=0 ppid=1801 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.677000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:41:41.712553 kernel: audit: type=1327 audit(1707766901.677:208): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:41:41.712757 kernel: audit: type=1325 audit(1707766901.691:209): table=nat:37 family=10 entries=1 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.691000 audit[1844]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.717553 kernel: audit: type=1300 audit(1707766901.691:209): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff04dedf60 a2=0 a3=7fff04dedf4c items=0 ppid=1801 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.691000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff04dedf60 a2=0 a3=7fff04dedf4c items=0 ppid=1801 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:41:41.730720 kernel: audit: type=1327 audit(1707766901.691:209): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:41:41.730930 kernel: audit: type=1325 audit(1707766901.691:210): table=nat:38 family=2 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.691000 audit[1843]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.691000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9087f710 a2=0 a3=7ffe9087f6fc items=0 ppid=1801 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:41:41.698000 audit[1845]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.698000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1a402f80 a2=0 a3=7ffd1a402f6c items=0 ppid=1801 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:41:41.707000 audit[1846]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.707000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffd45c75b00 a2=0 a3=7ffd45c75aec items=0 ppid=1801 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:41:41.775000 audit[1847]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.775000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5e986210 a2=0 a3=7ffd5e9861fc items=0 ppid=1801 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:41:41.779000 audit[1849]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.779000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd636fba50 a2=0 a3=7ffd636fba3c items=0 ppid=1801 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 19:41:41.786000 audit[1852]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.786000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeb3c25a30 a2=0 a3=7ffeb3c25a1c items=0 ppid=1801 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 19:41:41.789000 audit[1853]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.789000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe66ea8310 a2=0 a3=7ffe66ea82fc items=0 ppid=1801 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:41:41.794000 audit[1855]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule 
pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.794000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffb94e170 a2=0 a3=7ffffb94e15c items=0 ppid=1801 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:41:41.796000 audit[1856]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.796000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1cb5f460 a2=0 a3=7ffe1cb5f44c items=0 ppid=1801 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:41:41.801000 audit[1858]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.801000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff5294b0e0 a2=0 a3=7fff5294b0cc items=0 ppid=1801 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:41:41.808000 audit[1861]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.808000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc72137c60 a2=0 a3=7ffc72137c4c items=0 ppid=1801 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 19:41:41.811000 audit[1862]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.811000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb34250a0 a2=0 a3=7ffeb342508c items=0 ppid=1801 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:41:41.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:41:41.817000 audit[1864]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.817000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc87c43db0 a2=0 a3=7ffc87c43d9c items=0 ppid=1801 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:41:41.819000 audit[1865]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.819000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe78714b10 a2=0 a3=7ffe78714afc items=0 ppid=1801 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:41:41.824000 audit[1867]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.824000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdef301fb0 a2=0 a3=7ffdef301f9c items=0 ppid=1801 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:41:41.831000 audit[1870]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.831000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb19553a0 a2=0 a3=7ffdb195538c items=0 ppid=1801 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:41:41.839000 audit[1873]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.839000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdefbc8410 a2=0 
a3=7ffdefbc83fc items=0 ppid=1801 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:41:41.842000 audit[1874]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.842000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe9c412b10 a2=0 a3=7ffe9c412afc items=0 ppid=1801 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:41:41.847000 audit[1876]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.847000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd30b76f90 a2=0 a3=7ffd30b76f7c items=0 ppid=1801 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:41:41.853000 audit[1879]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:41:41.853000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc414e3e60 a2=0 a3=7ffc414e3e4c items=0 ppid=1801 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:41:41.871000 audit[1883]: NETFILTER_CFG table=filter:58 family=2 entries=4 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:41:41.871000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe636f4ec0 a2=0 a3=7ffe636f4eac items=0 ppid=1801 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:41:41.889834 kubelet[1560]: E0212 19:41:41.889394 1560 dns.go:156] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:41.893686 kubelet[1560]: E0212 19:41:41.893646 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.893686 kubelet[1560]: W0212 19:41:41.893675 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.894574 kubelet[1560]: E0212 19:41:41.893706 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.893000 audit[1883]: NETFILTER_CFG table=nat:59 family=2 entries=57 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:41:41.893000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe636f4ec0 a2=0 a3=7ffe636f4eac items=0 ppid=1801 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:41:41.896062 kubelet[1560]: E0212 19:41:41.895536 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.896062 kubelet[1560]: W0212 19:41:41.895556 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.896062 kubelet[1560]: E0212 19:41:41.895585 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.896627 kubelet[1560]: E0212 19:41:41.896380 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.896627 kubelet[1560]: W0212 19:41:41.896399 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.896627 kubelet[1560]: E0212 19:41:41.896437 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.897032 kubelet[1560]: E0212 19:41:41.896887 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.897032 kubelet[1560]: W0212 19:41:41.896904 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.897032 kubelet[1560]: E0212 19:41:41.896926 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.897410 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.898873 kubelet[1560]: W0212 19:41:41.897936 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.898001 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.898334 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.898873 kubelet[1560]: W0212 19:41:41.898347 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.898387 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.898807 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.898873 kubelet[1560]: W0212 19:41:41.898818 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.898873 kubelet[1560]: E0212 19:41:41.898834 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.900324 kubelet[1560]: E0212 19:41:41.899583 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.900324 kubelet[1560]: W0212 19:41:41.899611 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.900324 kubelet[1560]: E0212 19:41:41.899629 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.900324 kubelet[1560]: E0212 19:41:41.900071 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.900324 kubelet[1560]: W0212 19:41:41.900082 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.901216 kubelet[1560]: E0212 19:41:41.900462 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.901216 kubelet[1560]: E0212 19:41:41.900973 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.901216 kubelet[1560]: W0212 19:41:41.900983 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.901216 kubelet[1560]: E0212 19:41:41.901017 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.902599 kubelet[1560]: E0212 19:41:41.901785 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.902599 kubelet[1560]: W0212 19:41:41.901797 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.902599 kubelet[1560]: E0212 19:41:41.901813 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.902599 kubelet[1560]: E0212 19:41:41.902307 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.902599 kubelet[1560]: W0212 19:41:41.902317 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.902599 kubelet[1560]: E0212 19:41:41.902352 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903108 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.903740 kubelet[1560]: W0212 19:41:41.903123 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903142 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903324 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.903740 kubelet[1560]: W0212 19:41:41.903332 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903342 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903508 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.903740 kubelet[1560]: W0212 19:41:41.903515 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903524 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.903740 kubelet[1560]: E0212 19:41:41.903655 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.904064 kubelet[1560]: W0212 19:41:41.903686 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.904064 kubelet[1560]: E0212 19:41:41.903698 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.910156 kubelet[1560]: I0212 19:41:41.909713 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s2sj7" podStartSLOduration=-9.223372024945137e+09 pod.CreationTimestamp="2024-02-12 19:41:30 +0000 UTC" firstStartedPulling="2024-02-12 19:41:39.17942334 +0000 UTC m=+22.115748575" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:41:41.90808095 +0000 UTC m=+24.844406205" watchObservedRunningTime="2024-02-12 19:41:41.909638296 +0000 UTC m=+24.845963543" Feb 12 19:41:41.916000 audit[1908]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.916000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4789dfa0 a2=0 a3=7ffe4789df8c items=0 ppid=1801 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:41:41.921000 audit[1910]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.921000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd23fa7fb0 a2=0 a3=7ffd23fa7f9c items=0 ppid=1801 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 19:41:41.928000 audit[1913]: NETFILTER_CFG table=filter:62 family=10 
entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.928000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcef69d0b0 a2=0 a3=7ffcef69d09c items=0 ppid=1801 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 19:41:41.930000 audit[1914]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.930000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd230f2f0 a2=0 a3=7fffd230f2dc items=0 ppid=1801 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.929918 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.934743 kubelet[1560]: W0212 19:41:41.929946 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.929977 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.930302 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.934743 kubelet[1560]: W0212 19:41:41.930316 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.930343 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.930677 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.934743 kubelet[1560]: W0212 19:41:41.930691 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.930718 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.934743 kubelet[1560]: E0212 19:41:41.930945 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.935237 kubelet[1560]: W0212 19:41:41.930968 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.935237 kubelet[1560]: E0212 19:41:41.930994 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.935237 kubelet[1560]: E0212 19:41:41.931254 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.935237 kubelet[1560]: W0212 19:41:41.931268 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.935237 kubelet[1560]: E0212 19:41:41.931301 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.935664 kubelet[1560]: E0212 19:41:41.935636 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.935831 kubelet[1560]: W0212 19:41:41.935795 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.935000 audit[1922]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.935000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcdd41a800 a2=0 a3=7ffcdd41a7ec items=0 ppid=1801 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:41:41.936417 kubelet[1560]: E0212 19:41:41.936393 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.938000 audit[1924]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.938000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca807e3f0 a2=0 a3=7ffca807e3dc items=0 ppid=1801 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:41:41.941700 kubelet[1560]: E0212 19:41:41.941665 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.941906 kubelet[1560]: W0212 19:41:41.941876 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.942050 kubelet[1560]: E0212 19:41:41.942030 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.942605 kubelet[1560]: E0212 19:41:41.942583 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.942782 kubelet[1560]: W0212 19:41:41.942755 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.942917 kubelet[1560]: E0212 19:41:41.942898 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.943356 kubelet[1560]: E0212 19:41:41.943339 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.943515 kubelet[1560]: W0212 19:41:41.943494 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.943630 kubelet[1560]: E0212 19:41:41.943614 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.944534 kubelet[1560]: E0212 19:41:41.944514 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.944702 kubelet[1560]: W0212 19:41:41.944680 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.944838 kubelet[1560]: E0212 19:41:41.944824 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.945698 kubelet[1560]: E0212 19:41:41.945679 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.945837 kubelet[1560]: W0212 19:41:41.945818 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.945968 kubelet[1560]: E0212 19:41:41.945950 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:41.947000 audit[1930]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.947000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc61678930 a2=0 a3=7ffc6167891c items=0 ppid=1801 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 19:41:41.948505 kubelet[1560]: E0212 19:41:41.948485 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:41.948637 kubelet[1560]: W0212 19:41:41.948614 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:41.948760 kubelet[1560]: E0212 19:41:41.948745 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:41.953000 audit[1934]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.953000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff9d7095b0 a2=0 a3=7fff9d70959c items=0 ppid=1801 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:41:41.954000 audit[1935]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.954000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8d93f360 a2=0 a3=7ffc8d93f34c items=0 ppid=1801 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:41:41.958000 audit[1937]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.958000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffedb3b3eb0 a2=0 a3=7ffedb3b3e9c items=0 ppid=1801 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:41:41.961000 audit[1938]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.961000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf0458f20 a2=0 a3=7ffdf0458f0c items=0 ppid=1801 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:41:41.966000 audit[1940]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.966000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf0b75bf0 a2=0 a3=7ffcf0b75bdc items=0 ppid=1801 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:41:41.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:41:41.974000 audit[1943]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.974000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe112b0980 a2=0 a3=7ffe112b096c items=0 ppid=1801 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:41:41.980000 audit[1946]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.980000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5faa0fe0 a2=0 a3=7ffe5faa0fcc items=0 ppid=1801 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 19:41:41.982000 audit[1947]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.982000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3856cb20 a2=0 a3=7fff3856cb0c items=0 ppid=1801 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:41:41.986000 audit[1949]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:41:41.986000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc6793a440 a2=0 a3=7ffc6793a42c items=0 ppid=1801 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:41:41.991000 audit[1952]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 12 19:41:41.991000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd85b533b0 a2=0 a3=7ffd85b5339c items=0 ppid=1801 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:41.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:41:42.002000 audit[1956]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:41:42.002000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd8ddcaa30 a2=0 a3=7ffd8ddcaa1c items=0 ppid=1801 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:42.002000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:41:42.003000 audit[1956]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:41:42.003000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffd8ddcaa30 a2=0 a3=7ffd8ddcaa1c items=0 ppid=1801 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:42.003000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:41:42.543530 kubelet[1560]: E0212 19:41:42.543483 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:42.813057 kubelet[1560]: E0212 19:41:42.812902 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:42.890949 kubelet[1560]: E0212 19:41:42.890900 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:42.912129 kubelet[1560]: E0212 19:41:42.912086 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.912371 kubelet[1560]: W0212 19:41:42.912349 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.912616 kubelet[1560]: E0212 19:41:42.912596 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.913020 kubelet[1560]: E0212 19:41:42.912998 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.913176 kubelet[1560]: W0212 19:41:42.913158 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.913369 kubelet[1560]: E0212 19:41:42.913268 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.913811 kubelet[1560]: E0212 19:41:42.913794 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.913940 kubelet[1560]: W0212 19:41:42.913924 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.914044 kubelet[1560]: E0212 19:41:42.914030 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.914501 kubelet[1560]: E0212 19:41:42.914480 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.914628 kubelet[1560]: W0212 19:41:42.914606 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.914735 kubelet[1560]: E0212 19:41:42.914721 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.915072 kubelet[1560]: E0212 19:41:42.915058 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.915193 kubelet[1560]: W0212 19:41:42.915178 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.915293 kubelet[1560]: E0212 19:41:42.915280 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.915641 kubelet[1560]: E0212 19:41:42.915626 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.915746 kubelet[1560]: W0212 19:41:42.915731 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.915862 kubelet[1560]: E0212 19:41:42.915849 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.916222 kubelet[1560]: E0212 19:41:42.916208 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.916343 kubelet[1560]: W0212 19:41:42.916328 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.916475 kubelet[1560]: E0212 19:41:42.916462 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.916813 kubelet[1560]: E0212 19:41:42.916798 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.916920 kubelet[1560]: W0212 19:41:42.916905 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.917015 kubelet[1560]: E0212 19:41:42.917003 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.917475 kubelet[1560]: E0212 19:41:42.917458 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.917607 kubelet[1560]: W0212 19:41:42.917583 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.917716 kubelet[1560]: E0212 19:41:42.917702 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.918152 kubelet[1560]: E0212 19:41:42.918137 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.918257 kubelet[1560]: W0212 19:41:42.918243 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.918355 kubelet[1560]: E0212 19:41:42.918343 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.918704 kubelet[1560]: E0212 19:41:42.918686 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.918836 kubelet[1560]: W0212 19:41:42.918817 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.918936 kubelet[1560]: E0212 19:41:42.918924 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.919244 kubelet[1560]: E0212 19:41:42.919231 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.919342 kubelet[1560]: W0212 19:41:42.919328 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.919454 kubelet[1560]: E0212 19:41:42.919422 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.919772 kubelet[1560]: E0212 19:41:42.919758 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.919884 kubelet[1560]: W0212 19:41:42.919869 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.919976 kubelet[1560]: E0212 19:41:42.919964 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.920285 kubelet[1560]: E0212 19:41:42.920272 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.920381 kubelet[1560]: W0212 19:41:42.920367 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.920519 kubelet[1560]: E0212 19:41:42.920506 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.920864 kubelet[1560]: E0212 19:41:42.920841 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.920997 kubelet[1560]: W0212 19:41:42.920976 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.921093 kubelet[1560]: E0212 19:41:42.921076 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.921476 kubelet[1560]: E0212 19:41:42.921460 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.921602 kubelet[1560]: W0212 19:41:42.921579 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.921699 kubelet[1560]: E0212 19:41:42.921686 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.938159 kubelet[1560]: E0212 19:41:42.938120 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.938159 kubelet[1560]: W0212 19:41:42.938150 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.938497 kubelet[1560]: E0212 19:41:42.938181 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.938575 kubelet[1560]: E0212 19:41:42.938509 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.938575 kubelet[1560]: W0212 19:41:42.938523 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.938575 kubelet[1560]: E0212 19:41:42.938545 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.938895 kubelet[1560]: E0212 19:41:42.938874 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.938895 kubelet[1560]: W0212 19:41:42.938894 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.939043 kubelet[1560]: E0212 19:41:42.938926 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.939187 kubelet[1560]: E0212 19:41:42.939169 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.939253 kubelet[1560]: W0212 19:41:42.939188 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.939253 kubelet[1560]: E0212 19:41:42.939216 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.939473 kubelet[1560]: E0212 19:41:42.939454 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.939473 kubelet[1560]: W0212 19:41:42.939472 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.939609 kubelet[1560]: E0212 19:41:42.939497 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.939767 kubelet[1560]: E0212 19:41:42.939748 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.939767 kubelet[1560]: W0212 19:41:42.939765 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.939885 kubelet[1560]: E0212 19:41:42.939789 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.940300 kubelet[1560]: E0212 19:41:42.940279 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.940477 kubelet[1560]: W0212 19:41:42.940459 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.940593 kubelet[1560]: E0212 19:41:42.940579 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.940899 kubelet[1560]: E0212 19:41:42.940879 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.940975 kubelet[1560]: W0212 19:41:42.940900 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.940975 kubelet[1560]: E0212 19:41:42.940930 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.941207 kubelet[1560]: E0212 19:41:42.941189 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.941207 kubelet[1560]: W0212 19:41:42.941206 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.941403 kubelet[1560]: E0212 19:41:42.941225 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.941584 kubelet[1560]: E0212 19:41:42.941564 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.941648 kubelet[1560]: W0212 19:41:42.941588 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.941648 kubelet[1560]: E0212 19:41:42.941608 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:41:42.941883 kubelet[1560]: E0212 19:41:42.941866 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.942000 kubelet[1560]: W0212 19:41:42.941883 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.942000 kubelet[1560]: E0212 19:41:42.941905 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:42.942270 kubelet[1560]: E0212 19:41:42.942256 1560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:41:42.942270 kubelet[1560]: W0212 19:41:42.942267 1560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:41:42.942397 kubelet[1560]: E0212 19:41:42.942281 1560 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:41:43.229162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866367818.mount: Deactivated successfully. Feb 12 19:41:43.544977 kubelet[1560]: E0212 19:41:43.544898 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:44.444324 env[1188]: time="2024-02-12T19:41:44.444238856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:44.449048 env[1188]: time="2024-02-12T19:41:44.448988749Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:44.453901 env[1188]: time="2024-02-12T19:41:44.453835185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:44.458392 env[1188]: time="2024-02-12T19:41:44.458333299Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:44.460483 env[1188]: time="2024-02-12T19:41:44.460424249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 12 19:41:44.463170 env[1188]: time="2024-02-12T19:41:44.463120808Z" level=info msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 19:41:44.489757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619802781.mount: Deactivated successfully. 
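
The repeated driver-call.go/plugins.go errors above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the verb "init" and getting no stdout to parse as JSON; they only stop once Calico's pod2daemon-flexvol image (pulled at 19:41:44 above) installs a real driver at that path. Below is a minimal sketch of the handshake the kubelet expects, written in Python purely for illustration; the actual nodeagent~uds driver is not this script.

#!/usr/bin/env python3
# Hypothetical stand-in for the missing FlexVolume driver binary at
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
# The kubelet invokes the driver with a verb ("init", "mount", ...) and parses
# stdout as JSON; empty stdout is what yields "unexpected end of JSON input".
import json
import sys

def main() -> int:
    verb = sys.argv[1] if len(sys.argv) > 1 else ""
    if verb == "init":
        # Minimal successful handshake: report that attach/detach is not supported.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any verb this stub does not implement is declined explicitly.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
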
Feb 12 19:41:44.520141 env[1188]: time="2024-02-12T19:41:44.520058181Z" level=info msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a\"" Feb 12 19:41:44.526154 env[1188]: time="2024-02-12T19:41:44.522838091Z" level=info msg="StartContainer for \"712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a\"" Feb 12 19:41:44.545677 kubelet[1560]: E0212 19:41:44.545620 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:44.702660 env[1188]: time="2024-02-12T19:41:44.702504606Z" level=info msg="StartContainer for \"712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a\" returns successfully" Feb 12 19:41:44.749299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a-rootfs.mount: Deactivated successfully. Feb 12 19:41:44.812588 kubelet[1560]: E0212 19:41:44.812504 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:44.842233 env[1188]: time="2024-02-12T19:41:44.842152713Z" level=info msg="shim disconnected" id=712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a Feb 12 19:41:44.842635 env[1188]: time="2024-02-12T19:41:44.842590474Z" level=warning msg="cleaning up after shim disconnected" id=712ca3efe007f514c04b72199322b9a87425bf63af8222860372e8622d5abb6a namespace=k8s.io Feb 12 19:41:44.842795 env[1188]: time="2024-02-12T19:41:44.842770638Z" level=info msg="cleaning up dead shim" Feb 12 19:41:44.859387 env[1188]: time="2024-02-12T19:41:44.859311702Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2033 runtime=io.containerd.runc.v2\n" Feb 12 19:41:44.896419 kubelet[1560]: E0212 19:41:44.896351 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:44.898176 env[1188]: time="2024-02-12T19:41:44.898133926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 19:41:45.546157 kubelet[1560]: E0212 19:41:45.546085 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:46.479546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238974822.mount: Deactivated successfully. 
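
The recurring dns.go:156 warning above ("Nameserver limits exceeded ... 67.207.67.2 67.207.67.3 67.207.67.2") reflects the resolv.conf convention of honouring at most three nameserver entries; anything beyond that is dropped from the applied list. A hedged sketch of that truncation, assuming the default /etc/resolv.conf path and the conventional three-entry limit:

# Illustrative only: shows how a resolv.conf with more than three (or duplicate)
# nameserver lines produces the "applied nameserver line" quoted in the warning.
from pathlib import Path

MAX_NAMESERVERS = 3  # conventional resolv.conf limit the kubelet warns about

def effective_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    servers = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS]

if __name__ == "__main__":
    print("applied nameserver line:", " ".join(effective_nameservers()))
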
Feb 12 19:41:46.546665 kubelet[1560]: E0212 19:41:46.546574 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:46.812518 kubelet[1560]: E0212 19:41:46.812037 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:47.547344 kubelet[1560]: E0212 19:41:47.547258 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:48.548126 kubelet[1560]: E0212 19:41:48.548059 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:48.812419 kubelet[1560]: E0212 19:41:48.811929 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:49.548937 kubelet[1560]: E0212 19:41:49.548869 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:49.620211 update_engine[1178]: I0212 19:41:49.619546 1178 update_attempter.cc:509] Updating boot flags... Feb 12 19:41:50.549853 kubelet[1560]: E0212 19:41:50.549765 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:50.813586 kubelet[1560]: E0212 19:41:50.812612 1560 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:50.861399 env[1188]: time="2024-02-12T19:41:50.861339767Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:50.868229 env[1188]: time="2024-02-12T19:41:50.868162973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:50.875340 env[1188]: time="2024-02-12T19:41:50.875287652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:50.880644 env[1188]: time="2024-02-12T19:41:50.880584193Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:50.883590 env[1188]: time="2024-02-12T19:41:50.883524453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 12 19:41:50.886590 env[1188]: time="2024-02-12T19:41:50.886530043Z" level=info 
msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:41:50.917264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087953774.mount: Deactivated successfully. Feb 12 19:41:50.928233 env[1188]: time="2024-02-12T19:41:50.928062916Z" level=info msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e\"" Feb 12 19:41:50.929467 env[1188]: time="2024-02-12T19:41:50.929403177Z" level=info msg="StartContainer for \"6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e\"" Feb 12 19:41:51.047656 env[1188]: time="2024-02-12T19:41:51.047574039Z" level=info msg="StartContainer for \"6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e\" returns successfully" Feb 12 19:41:51.550303 kubelet[1560]: E0212 19:41:51.550208 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:51.672901 env[1188]: time="2024-02-12T19:41:51.672781535Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:41:51.693618 kubelet[1560]: I0212 19:41:51.693557 1560 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:41:51.708515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e-rootfs.mount: Deactivated successfully. 
Feb 12 19:41:51.746810 env[1188]: time="2024-02-12T19:41:51.746743368Z" level=info msg="shim disconnected" id=6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e Feb 12 19:41:51.747132 env[1188]: time="2024-02-12T19:41:51.747095210Z" level=warning msg="cleaning up after shim disconnected" id=6bce2dd42a36f43adfc00c26b6b3a9f9e846c05fb5217b0a4d80b44126dc690e namespace=k8s.io Feb 12 19:41:51.747253 env[1188]: time="2024-02-12T19:41:51.747229640Z" level=info msg="cleaning up dead shim" Feb 12 19:41:51.760996 env[1188]: time="2024-02-12T19:41:51.760930647Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2117 runtime=io.containerd.runc.v2\n" Feb 12 19:41:51.911757 kubelet[1560]: E0212 19:41:51.911638 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:51.912884 env[1188]: time="2024-02-12T19:41:51.912846080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 19:41:52.551471 kubelet[1560]: E0212 19:41:52.551351 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:52.815848 env[1188]: time="2024-02-12T19:41:52.815698416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8d4g,Uid:436f3581-5e49-4ebf-b2ed-e5dfb138d87d,Namespace:calico-system,Attempt:0,}" Feb 12 19:41:52.942520 env[1188]: time="2024-02-12T19:41:52.942438832Z" level=error msg="Failed to destroy network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:52.946191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786-shm.mount: Deactivated successfully. 
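
Every sandbox failure that follows ("stat /var/lib/calico/nodename: no such file or directory") reduces to the same missing precondition: the Calico CNI plugin reads the node name from a file that the calico/node container writes once it is running. A hedged probe of that precondition, using only the path quoted in the error itself:

# Illustrative readiness check for the file the calico CNI plugin stats.
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

def calico_node_ready() -> bool:
    if NODENAME_FILE.is_file():
        print("calico nodename:", NODENAME_FILE.read_text().strip())
        return True
    print(f"missing {NODENAME_FILE}: calico/node has not initialised this host yet")
    return False

if __name__ == "__main__":
    calico_node_ready()
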
Feb 12 19:41:52.947712 env[1188]: time="2024-02-12T19:41:52.947256702Z" level=error msg="encountered an error cleaning up failed sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:52.947712 env[1188]: time="2024-02-12T19:41:52.947359015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8d4g,Uid:436f3581-5e49-4ebf-b2ed-e5dfb138d87d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:52.947863 kubelet[1560]: E0212 19:41:52.947783 1560 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:52.947939 kubelet[1560]: E0212 19:41:52.947879 1560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:52.947939 kubelet[1560]: E0212 19:41:52.947931 1560 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x8d4g" Feb 12 19:41:52.948068 kubelet[1560]: E0212 19:41:52.948003 1560 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x8d4g_calico-system(436f3581-5e49-4ebf-b2ed-e5dfb138d87d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x8d4g_calico-system(436f3581-5e49-4ebf-b2ed-e5dfb138d87d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:53.551835 kubelet[1560]: E0212 19:41:53.551773 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:53.915703 kubelet[1560]: I0212 19:41:53.914944 1560 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:41:53.916120 env[1188]: time="2024-02-12T19:41:53.916079724Z" level=info msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" Feb 12 19:41:53.972705 env[1188]: time="2024-02-12T19:41:53.972621810Z" level=error msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" failed" error="failed to destroy network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:53.973934 kubelet[1560]: E0212 19:41:53.973628 1560 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:41:53.973934 kubelet[1560]: E0212 19:41:53.973715 1560 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786} Feb 12 19:41:53.973934 kubelet[1560]: E0212 19:41:53.973786 1560 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:41:53.973934 kubelet[1560]: E0212 19:41:53.973883 1560 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"436f3581-5e49-4ebf-b2ed-e5dfb138d87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x8d4g" podUID=436f3581-5e49-4ebf-b2ed-e5dfb138d87d Feb 12 19:41:54.281202 kubelet[1560]: I0212 19:41:54.280361 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:41:54.317075 kubelet[1560]: I0212 19:41:54.317020 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2jl\" (UniqueName: \"kubernetes.io/projected/f32f39f2-3127-4087-aae0-c6d600bae732-kube-api-access-mx2jl\") pod \"nginx-deployment-8ffc5cf85-x8xrh\" (UID: \"f32f39f2-3127-4087-aae0-c6d600bae732\") " pod="default/nginx-deployment-8ffc5cf85-x8xrh" Feb 12 19:41:54.552807 kubelet[1560]: E0212 19:41:54.552742 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:54.592954 env[1188]: time="2024-02-12T19:41:54.592893768Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-x8xrh,Uid:f32f39f2-3127-4087-aae0-c6d600bae732,Namespace:default,Attempt:0,}" Feb 12 19:41:54.766421 env[1188]: time="2024-02-12T19:41:54.766336099Z" level=error msg="Failed to destroy network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:54.769906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e-shm.mount: Deactivated successfully. Feb 12 19:41:54.772362 env[1188]: time="2024-02-12T19:41:54.772281040Z" level=error msg="encountered an error cleaning up failed sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:54.772658 env[1188]: time="2024-02-12T19:41:54.772584303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-x8xrh,Uid:f32f39f2-3127-4087-aae0-c6d600bae732,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:54.773547 kubelet[1560]: E0212 19:41:54.773061 1560 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:54.773547 kubelet[1560]: E0212 19:41:54.773135 1560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-x8xrh" Feb 12 19:41:54.773547 kubelet[1560]: E0212 19:41:54.773170 1560 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-x8xrh" Feb 12 19:41:54.773820 kubelet[1560]: E0212 19:41:54.773259 1560 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-x8xrh_default(f32f39f2-3127-4087-aae0-c6d600bae732)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-x8xrh_default(f32f39f2-3127-4087-aae0-c6d600bae732)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-x8xrh" podUID=f32f39f2-3127-4087-aae0-c6d600bae732 Feb 12 19:41:54.922937 kubelet[1560]: I0212 19:41:54.922067 1560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:41:54.923303 env[1188]: time="2024-02-12T19:41:54.923247303Z" level=info msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" Feb 12 19:41:54.999756 env[1188]: time="2024-02-12T19:41:54.999652678Z" level=error msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" failed" error="failed to destroy network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:41:55.000641 kubelet[1560]: E0212 19:41:55.000595 1560 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:41:55.000791 kubelet[1560]: E0212 19:41:55.000664 1560 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e} Feb 12 19:41:55.000791 kubelet[1560]: E0212 19:41:55.000744 1560 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f32f39f2-3127-4087-aae0-c6d600bae732\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:41:55.000940 kubelet[1560]: E0212 19:41:55.000794 1560 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f32f39f2-3127-4087-aae0-c6d600bae732\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-x8xrh" podUID=f32f39f2-3127-4087-aae0-c6d600bae732 Feb 12 19:41:55.553592 kubelet[1560]: E0212 19:41:55.553529 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:56.554464 kubelet[1560]: E0212 19:41:56.554364 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:41:57.520257 kubelet[1560]: E0212 19:41:57.520191 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:57.555102 kubelet[1560]: E0212 19:41:57.555038 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:58.556599 kubelet[1560]: E0212 19:41:58.556529 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:59.557712 kubelet[1560]: E0212 19:41:59.557644 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:59.566519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912403143.mount: Deactivated successfully. Feb 12 19:41:59.665267 env[1188]: time="2024-02-12T19:41:59.665204562Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:59.672183 env[1188]: time="2024-02-12T19:41:59.672117411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:59.676148 env[1188]: time="2024-02-12T19:41:59.676085394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:59.684519 env[1188]: time="2024-02-12T19:41:59.684412779Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:59.685834 env[1188]: time="2024-02-12T19:41:59.685761977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 12 19:41:59.716464 env[1188]: time="2024-02-12T19:41:59.716384317Z" level=info msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 19:41:59.747691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809664669.mount: Deactivated successfully. 
Feb 12 19:41:59.761327 env[1188]: time="2024-02-12T19:41:59.761229225Z" level=info msg="CreateContainer within sandbox \"3ac3ff580e85605177d69a45f31409d2c45a8d51553f189f5bc8531c730b5cf7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684\"" Feb 12 19:41:59.762667 env[1188]: time="2024-02-12T19:41:59.762562816Z" level=info msg="StartContainer for \"8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684\"" Feb 12 19:41:59.865531 env[1188]: time="2024-02-12T19:41:59.865207341Z" level=info msg="StartContainer for \"8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684\" returns successfully" Feb 12 19:41:59.952453 kubelet[1560]: E0212 19:41:59.952394 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:00.001000 audit[2324]: NETFILTER_CFG table=filter:79 family=2 entries=7 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.004255 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 19:42:00.004450 kernel: audit: type=1325 audit(1707766920.001:251): table=filter:79 family=2 entries=7 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.001000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe0ea76230 a2=0 a3=7ffe0ea7621c items=0 ppid=1801 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.018667 kernel: audit: type=1300 audit(1707766920.001:251): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe0ea76230 a2=0 a3=7ffe0ea7621c items=0 ppid=1801 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.034514 kernel: audit: type=1327 audit(1707766920.001:251): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.021000 audit[2324]: NETFILTER_CFG table=nat:80 family=2 entries=85 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.021000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffe0ea76230 a2=0 a3=7ffe0ea7621c items=0 ppid=1801 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.050343 kernel: audit: type=1325 audit(1707766920.021:252): table=nat:80 family=2 entries=85 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.050538 kernel: audit: type=1300 audit(1707766920.021:252): arch=c000003e syscall=46 success=yes exit=28484 a0=3 a1=7ffe0ea76230 a2=0 a3=7ffe0ea7621c items=0 ppid=1801 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.052111 kubelet[1560]: I0212 19:42:00.051638 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-6xxqd" podStartSLOduration=-9.2233720068032e+09 pod.CreationTimestamp="2024-02-12 19:41:30 +0000 UTC" firstStartedPulling="2024-02-12 19:41:39.185995077 +0000 UTC m=+22.122320309" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:42:00.042139199 +0000 UTC m=+42.978464451" watchObservedRunningTime="2024-02-12 19:42:00.05157647 +0000 UTC m=+42.987901721" Feb 12 19:42:00.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.058484 kernel: audit: type=1327 audit(1707766920.021:252): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.091457 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 19:42:00.091612 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 12 19:42:00.131000 audit[2362]: NETFILTER_CFG table=filter:81 family=2 entries=6 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.137481 kernel: audit: type=1325 audit(1707766920.131:253): table=filter:81 family=2 entries=6 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.131000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd72420400 a2=0 a3=7ffd724203ec items=0 ppid=1801 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.151212 kernel: audit: type=1300 audit(1707766920.131:253): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd72420400 a2=0 a3=7ffd724203ec items=0 ppid=1801 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.151404 kernel: audit: type=1327 audit(1707766920.131:253): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.152000 audit[2362]: NETFILTER_CFG table=nat:82 family=2 entries=92 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:00.152000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd72420400 a2=0 a3=7ffd724203ec items=0 ppid=1801 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:00.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:00.159463 kernel: audit: type=1325 audit(1707766920.152:254): table=nat:82 family=2 entries=92 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 
12 19:42:00.558220 kubelet[1560]: E0212 19:42:00.558125 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:00.956014 kubelet[1560]: E0212 19:42:00.955863 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:00.994368 systemd[1]: run-containerd-runc-k8s.io-8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684-runc.heZMWB.mount: Deactivated successfully. Feb 12 19:42:01.558940 kubelet[1560]: E0212 19:42:01.558879 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:01.959504 kubelet[1560]: E0212 19:42:01.959327 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:01.987324 systemd[1]: run-containerd-runc-k8s.io-8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684-runc.XeWrya.mount: Deactivated successfully. Feb 12 19:42:02.142000 audit[2468]: AVC avc: denied { write } for pid=2468 comm="tee" name="fd" dev="proc" ino=20395 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.142000 audit[2468]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2af0d977 a2=241 a3=1b6 items=1 ppid=2428 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.142000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 12 19:42:02.142000 audit: PATH item=0 name="/dev/fd/63" inode=20529 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.165000 audit[2470]: AVC avc: denied { write } for pid=2470 comm="tee" name="fd" dev="proc" ino=20409 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.165000 audit[2470]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec4d4c967 a2=241 a3=1b6 items=1 ppid=2430 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.165000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 19:42:02.165000 audit: PATH item=0 name="/dev/fd/63" inode=20530 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.185000 audit[2472]: AVC avc: denied { write } for pid=2472 comm="tee" name="fd" dev="proc" ino=20549 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.185000 audit[2472]: 
SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1da17977 a2=241 a3=1b6 items=1 ppid=2426 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.185000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 19:42:02.185000 audit: PATH item=0 name="/dev/fd/63" inode=20535 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.190000 audit[2481]: AVC avc: denied { write } for pid=2481 comm="tee" name="fd" dev="proc" ino=20553 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.208000 audit[2483]: AVC avc: denied { write } for pid=2483 comm="tee" name="fd" dev="proc" ino=20564 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.212000 audit[2489]: AVC avc: denied { write } for pid=2489 comm="tee" name="fd" dev="proc" ino=20567 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.208000 audit[2483]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff933ff979 a2=241 a3=1b6 items=1 ppid=2436 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.208000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 12 19:42:02.208000 audit: PATH item=0 name="/dev/fd/63" inode=20545 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.190000 audit[2481]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfc36a978 a2=241 a3=1b6 items=1 ppid=2437 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.190000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 19:42:02.190000 audit: PATH item=0 name="/dev/fd/63" inode=20544 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.212000 audit[2489]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdd093977 a2=241 a3=1b6 items=1 ppid=2441 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.212000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 19:42:02.212000 audit: PATH item=0 
name="/dev/fd/63" inode=20548 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.245000 audit[2504]: AVC avc: denied { write } for pid=2504 comm="tee" name="fd" dev="proc" ino=20573 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:42:02.245000 audit[2504]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa5f82968 a2=241 a3=1b6 items=1 ppid=2455 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.245000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:42:02.245000 audit: PATH item=0 name="/dev/fd/63" inode=20569 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:02.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:42:02.519069 kernel: Initializing XFRM netlink socket Feb 12 19:42:02.560407 kubelet[1560]: E0212 19:42:02.560319 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit: BPF prog-id=10 op=LOAD Feb 12 19:42:02.859000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe85f8bab0 a2=70 a3=7f821d50a000 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.859000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit: BPF prog-id=11 op=LOAD Feb 12 19:42:02.859000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe85f8bab0 a2=70 a3=6e items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.859000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe85f8ba60 a2=70 a3=7ffe85f8bab0 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit: BPF prog-id=12 op=LOAD Feb 12 19:42:02.859000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe85f8ba40 a2=70 a3=7ffe85f8bab0 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.859000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:42:02.859000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.859000 audit[2571]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe85f8bb20 a2=70 a3=0 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.860000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.860000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe85f8bb10 a2=70 a3=0 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.860000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.860000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.860000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe85f8bb50 a2=70 a3=0 items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.860000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { perfmon } for pid=2571 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit[2571]: AVC avc: denied { bpf } for pid=2571 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.861000 audit: BPF prog-id=13 op=LOAD Feb 12 19:42:02.861000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe85f8ba70 a2=70 a3=ffffffff items=0 ppid=2454 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.861000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:42:02.869000 audit[2575]: AVC avc: denied { bpf } for pid=2575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.869000 audit[2575]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe3ac83120 a2=70 a3=fff80800 items=0 ppid=2454 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:42:02.870000 audit[2575]: AVC avc: denied { bpf } for pid=2575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:42:02.870000 audit[2575]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe3ac82ff0 a2=70 a3=3 items=0 ppid=2454 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:42:02.876000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:42:02.876000 audit[2577]: SYSCALL arch=c000003e syscall=9 success=yes exit=139677593640960 a0=7f09395a2000 a1=1000 a2=3 a3=812 items=0 ppid=2454 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip" exe="/usr/sbin/ip" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.876000 audit: PROCTITLE proctitle=6970006C696E6B0064656C0063616C69636F5F746D705F41 Feb 12 19:42:02.986000 audit[2601]: NETFILTER_CFG table=mangle:83 family=2 entries=19 op=nft_register_chain 
pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:02.986000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffcedf00650 a2=0 a3=7ffcedf0063c items=0 ppid=2454 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.986000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:02.990000 audit[2599]: NETFILTER_CFG table=raw:84 family=2 entries=19 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:02.990000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffee331edd0 a2=0 a3=558671fbc000 items=0 ppid=2454 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.990000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:02.995000 audit[2598]: NETFILTER_CFG table=nat:85 family=2 entries=16 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:02.995000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fff26de7430 a2=0 a3=55b0f01e4000 items=0 ppid=2454 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:02.995000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:03.000000 audit[2597]: NETFILTER_CFG table=filter:86 family=2 entries=39 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:03.000000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffde0935ca0 a2=0 a3=559db8234000 items=0 ppid=2454 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:03.000000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:03.560868 kubelet[1560]: E0212 19:42:03.560794 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:03.561774 systemd-networkd[1061]: vxlan.calico: Link UP Feb 12 19:42:03.561782 systemd-networkd[1061]: vxlan.calico: Gained carrier Feb 12 19:42:04.562011 kubelet[1560]: E0212 19:42:04.561929 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:05.544835 systemd-networkd[1061]: vxlan.calico: Gained IPv6LL Feb 12 19:42:05.562854 kubelet[1560]: E0212 19:42:05.562791 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:42:05.814678 env[1188]: time="2024-02-12T19:42:05.814549695Z" level=info msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.980 [INFO][2626] k8s.go 578: Cleaning up netns ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.980 [INFO][2626] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" iface="eth0" netns="/var/run/netns/cni-c7317f9f-fb8e-a510-2c73-516723d897f0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.983 [INFO][2626] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" iface="eth0" netns="/var/run/netns/cni-c7317f9f-fb8e-a510-2c73-516723d897f0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.987 [INFO][2626] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" iface="eth0" netns="/var/run/netns/cni-c7317f9f-fb8e-a510-2c73-516723d897f0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.987 [INFO][2626] k8s.go 585: Releasing IP address(es) ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:05.987 [INFO][2626] utils.go 188: Calico CNI releasing IP address ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.131 [INFO][2632] ipam_plugin.go 415: Releasing address using handleID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.131 [INFO][2632] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.131 [INFO][2632] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.145 [WARNING][2632] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.145 [INFO][2632] ipam_plugin.go 443: Releasing address using workloadID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.148 [INFO][2632] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:06.152324 env[1188]: 2024-02-12 19:42:06.150 [INFO][2626] k8s.go 591: Teardown processing complete. 
ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:06.159785 env[1188]: time="2024-02-12T19:42:06.155559043Z" level=info msg="TearDown network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" successfully" Feb 12 19:42:06.159785 env[1188]: time="2024-02-12T19:42:06.155619990Z" level=info msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" returns successfully" Feb 12 19:42:06.158387 systemd[1]: run-netns-cni\x2dc7317f9f\x2dfb8e\x2da510\x2d2c73\x2d516723d897f0.mount: Deactivated successfully. Feb 12 19:42:06.168207 env[1188]: time="2024-02-12T19:42:06.168133299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8d4g,Uid:436f3581-5e49-4ebf-b2ed-e5dfb138d87d,Namespace:calico-system,Attempt:1,}" Feb 12 19:42:06.389378 systemd-networkd[1061]: calied88616f721: Link UP Feb 12 19:42:06.391977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:42:06.392114 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calied88616f721: link becomes ready Feb 12 19:42:06.392647 systemd-networkd[1061]: calied88616f721: Gained carrier Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.254 [INFO][2639] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.132-k8s-csi--node--driver--x8d4g-eth0 csi-node-driver- calico-system 436f3581-5e49-4ebf-b2ed-e5dfb138d87d 1238 0 2024-02-12 19:41:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.151.132 csi-node-driver-x8d4g eth0 default [] [] [kns.calico-system ksa.calico-system.default] calied88616f721 [] []}} ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.254 [INFO][2639] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.306 [INFO][2649] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" HandleID="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.326 [INFO][2649] ipam_plugin.go 268: Auto assigning IP ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" HandleID="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d950), Attrs:map[string]string{"namespace":"calico-system", "node":"143.198.151.132", "pod":"csi-node-driver-x8d4g", "timestamp":"2024-02-12 19:42:06.306741912 +0000 UTC"}, Hostname:"143.198.151.132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.327 [INFO][2649] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.327 [INFO][2649] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.327 [INFO][2649] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.132' Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.331 [INFO][2649] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.342 [INFO][2649] ipam.go 372: Looking up existing affinities for host host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.352 [INFO][2649] ipam.go 489: Trying affinity for 192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.357 [INFO][2649] ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.362 [INFO][2649] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.362 [INFO][2649] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.366 [INFO][2649] ipam.go 1682: Creating new handle: k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3 Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.373 [INFO][2649] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.382 [INFO][2649] ipam.go 1216: Successfully claimed IPs: [192.168.60.193/26] block=192.168.60.192/26 handle="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.382 [INFO][2649] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.193/26] handle="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" host="143.198.151.132" Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.382 [INFO][2649] ipam_plugin.go 377: Released host-wide IPAM lock. 
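The ipam_plugin/ipam trace above is the normal Calico assignment path: take the host-wide IPAM lock, look up the node's /26 affinity block (192.168.60.192/26 for host 143.198.151.132), and claim one address from it, 192.168.60.193, for the csi-node-driver pod. A quick sanity check of those numbers with Python's ipaddress module (illustration only, using the values from the log):

import ipaddress

block = ipaddress.ip_network("192.168.60.192/26")   # node's affinity block (from the log)
claimed = ipaddress.ip_address("192.168.60.193")    # address Calico claimed for the pod

print(block.num_addresses)   # 64 -> a /26 affinity block gives the node 64 addresses
print(claimed in block)      # True -> the claimed IP really does come from that block
print(claimed)               # handed to the pod as 192.168.60.193/32 (see the endpoint below)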
Feb 12 19:42:06.424136 env[1188]: 2024-02-12 19:42:06.382 [INFO][2649] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.60.193/26] IPv6=[] ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" HandleID="k8s-pod-network.8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.386 [INFO][2639] k8s.go 385: Populated endpoint ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-csi--node--driver--x8d4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"436f3581-5e49-4ebf-b2ed-e5dfb138d87d", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"", Pod:"csi-node-driver-x8d4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calied88616f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.386 [INFO][2639] k8s.go 386: Calico CNI using IPs: [192.168.60.193/32] ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.386 [INFO][2639] dataplane_linux.go 68: Setting the host side veth name to calied88616f721 ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.391 [INFO][2639] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.394 [INFO][2639] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-csi--node--driver--x8d4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"436f3581-5e49-4ebf-b2ed-e5dfb138d87d", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3", Pod:"csi-node-driver-x8d4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calied88616f721", MAC:"e6:27:a3:27:4e:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:06.425354 env[1188]: 2024-02-12 19:42:06.410 [INFO][2639] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3" Namespace="calico-system" Pod="csi-node-driver-x8d4g" WorkloadEndpoint="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:06.447000 audit[2673]: NETFILTER_CFG table=filter:87 family=2 entries=36 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:06.450121 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 19:42:06.450263 kernel: audit: type=1325 audit(1707766926.447:280): table=filter:87 family=2 entries=36 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:06.447000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdc3b62340 a2=0 a3=7ffdc3b6232c items=0 ppid=2454 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:06.455643 env[1188]: time="2024-02-12T19:42:06.455558473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:06.455876 env[1188]: time="2024-02-12T19:42:06.455831433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:06.456027 env[1188]: time="2024-02-12T19:42:06.455996389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:06.456397 env[1188]: time="2024-02-12T19:42:06.456354089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3 pid=2681 runtime=io.containerd.runc.v2 Feb 12 19:42:06.447000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:06.470265 kernel: audit: type=1300 audit(1707766926.447:280): arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdc3b62340 a2=0 a3=7ffdc3b6232c items=0 ppid=2454 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:06.470421 kernel: audit: type=1327 audit(1707766926.447:280): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:06.534166 env[1188]: time="2024-02-12T19:42:06.534110811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x8d4g,Uid:436f3581-5e49-4ebf-b2ed-e5dfb138d87d,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3\"" Feb 12 19:42:06.536514 env[1188]: time="2024-02-12T19:42:06.536475323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 19:42:06.563883 kubelet[1560]: E0212 19:42:06.563822 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:06.813893 env[1188]: time="2024-02-12T19:42:06.813699716Z" level=info msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.887 [INFO][2729] k8s.go 578: Cleaning up netns ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.887 [INFO][2729] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" iface="eth0" netns="/var/run/netns/cni-40321f6b-5068-cb46-689a-609b02141d28" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.887 [INFO][2729] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" iface="eth0" netns="/var/run/netns/cni-40321f6b-5068-cb46-689a-609b02141d28" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.888 [INFO][2729] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" iface="eth0" netns="/var/run/netns/cni-40321f6b-5068-cb46-689a-609b02141d28" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.888 [INFO][2729] k8s.go 585: Releasing IP address(es) ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.888 [INFO][2729] utils.go 188: Calico CNI releasing IP address ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.934 [INFO][2735] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.934 [INFO][2735] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.934 [INFO][2735] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.948 [WARNING][2735] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.948 [INFO][2735] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.958 [INFO][2735] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:06.961892 env[1188]: 2024-02-12 19:42:06.959 [INFO][2729] k8s.go 591: Teardown processing complete. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:06.963533 env[1188]: time="2024-02-12T19:42:06.963482186Z" level=info msg="TearDown network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" successfully" Feb 12 19:42:06.963717 env[1188]: time="2024-02-12T19:42:06.963688970Z" level=info msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" returns successfully" Feb 12 19:42:06.964696 env[1188]: time="2024-02-12T19:42:06.964657527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-x8xrh,Uid:f32f39f2-3127-4087-aae0-c6d600bae732,Namespace:default,Attempt:1,}" Feb 12 19:42:07.162052 systemd[1]: run-containerd-runc-k8s.io-8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3-runc.FmT8wm.mount: Deactivated successfully. Feb 12 19:42:07.162278 systemd[1]: run-netns-cni\x2d40321f6b\x2d5068\x2dcb46\x2d689a\x2d609b02141d28.mount: Deactivated successfully. 
Feb 12 19:42:07.196239 systemd-networkd[1061]: cali849a8797ae5: Link UP Feb 12 19:42:07.199523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali849a8797ae5: link becomes ready Feb 12 19:42:07.199655 systemd-networkd[1061]: cali849a8797ae5: Gained carrier Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.062 [INFO][2742] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0 nginx-deployment-8ffc5cf85- default f32f39f2-3127-4087-aae0-c6d600bae732 1245 0 2024-02-12 19:41:54 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.151.132 nginx-deployment-8ffc5cf85-x8xrh eth0 default [] [] [kns.default ksa.default.default] cali849a8797ae5 [] []}} ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.062 [INFO][2742] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.105 [INFO][2754] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" HandleID="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.122 [INFO][2754] ipam_plugin.go 268: Auto assigning IP ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" HandleID="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a1d40), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.132", "pod":"nginx-deployment-8ffc5cf85-x8xrh", "timestamp":"2024-02-12 19:42:07.105572462 +0000 UTC"}, Hostname:"143.198.151.132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.122 [INFO][2754] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.122 [INFO][2754] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.122 [INFO][2754] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.132' Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.127 [INFO][2754] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.142 [INFO][2754] ipam.go 372: Looking up existing affinities for host host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.150 [INFO][2754] ipam.go 489: Trying affinity for 192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.154 [INFO][2754] ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.166 [INFO][2754] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.166 [INFO][2754] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.173 [INFO][2754] ipam.go 1682: Creating new handle: k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6 Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.179 [INFO][2754] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.189 [INFO][2754] ipam.go 1216: Successfully claimed IPs: [192.168.60.194/26] block=192.168.60.192/26 handle="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.189 [INFO][2754] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.194/26] handle="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" host="143.198.151.132" Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.190 [INFO][2754] ipam_plugin.go 377: Released host-wide IPAM lock. 
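The IPAM exchange above confirms this node's affinity for block 192.168.60.192/26 and claims 192.168.60.194 for the nginx pod, alongside the 192.168.60.193 already held by csi-node-driver-x8d4g. A short, self-contained Go check (standard library only, not Calico's IPAM code) that both addresses fall inside that affine block:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Affine block reported by ipam.go for host 143.198.151.132.
	_, block, err := net.ParseCIDR("192.168.60.192/26")
	if err != nil {
		panic(err)
	}

	// Pod addresses assigned on this node per the log above.
	for _, addr := range []string{"192.168.60.193", "192.168.60.194"} {
		fmt.Printf("%s in %s: %t\n", addr, block, block.Contains(net.ParseIP(addr)))
	}
	// A /26 spans 64 addresses, i.e. 192.168.60.192 through 192.168.60.255.
}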
Feb 12 19:42:07.213534 env[1188]: 2024-02-12 19:42:07.190 [INFO][2754] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.60.194/26] IPv6=[] ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" HandleID="k8s-pod-network.0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.192 [INFO][2742] k8s.go 385: Populated endpoint ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f32f39f2-3127-4087-aae0-c6d600bae732", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-x8xrh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali849a8797ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.192 [INFO][2742] k8s.go 386: Calico CNI using IPs: [192.168.60.194/32] ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.192 [INFO][2742] dataplane_linux.go 68: Setting the host side veth name to cali849a8797ae5 ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.200 [INFO][2742] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.200 [INFO][2742] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f32f39f2-3127-4087-aae0-c6d600bae732", ResourceVersion:"1245", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6", Pod:"nginx-deployment-8ffc5cf85-x8xrh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali849a8797ae5", MAC:"b2:a1:32:21:4e:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:07.215180 env[1188]: 2024-02-12 19:42:07.211 [INFO][2742] k8s.go 491: Wrote updated endpoint to datastore ContainerID="0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6" Namespace="default" Pod="nginx-deployment-8ffc5cf85-x8xrh" WorkloadEndpoint="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:07.236000 audit[2782]: NETFILTER_CFG table=filter:88 family=2 entries=40 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:07.236000 audit[2782]: SYSCALL arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fffb2d1b440 a2=0 a3=7fffb2d1b42c items=0 ppid=2454 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:07.248407 kernel: audit: type=1325 audit(1707766927.236:281): table=filter:88 family=2 entries=40 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:07.248570 kernel: audit: type=1300 audit(1707766927.236:281): arch=c000003e syscall=46 success=yes exit=21064 a0=3 a1=7fffb2d1b440 a2=0 a3=7fffb2d1b42c items=0 ppid=2454 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:07.249921 env[1188]: time="2024-02-12T19:42:07.249811005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:07.249921 env[1188]: time="2024-02-12T19:42:07.249853773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:07.249921 env[1188]: time="2024-02-12T19:42:07.249865300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:07.257585 kernel: audit: type=1327 audit(1707766927.236:281): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:07.236000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:07.257725 env[1188]: time="2024-02-12T19:42:07.250395492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6 pid=2785 runtime=io.containerd.runc.v2 Feb 12 19:42:07.341708 env[1188]: time="2024-02-12T19:42:07.341650250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-x8xrh,Uid:f32f39f2-3127-4087-aae0-c6d600bae732,Namespace:default,Attempt:1,} returns sandbox id \"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6\"" Feb 12 19:42:07.564886 kubelet[1560]: E0212 19:42:07.564800 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:07.785776 systemd-networkd[1061]: calied88616f721: Gained IPv6LL Feb 12 19:42:08.183901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836737811.mount: Deactivated successfully. Feb 12 19:42:08.565684 kubelet[1560]: E0212 19:42:08.565630 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:08.630525 env[1188]: time="2024-02-12T19:42:08.630124651Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:08.636525 env[1188]: time="2024-02-12T19:42:08.636419046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:08.639549 env[1188]: time="2024-02-12T19:42:08.639482437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:08.643730 env[1188]: time="2024-02-12T19:42:08.643671390Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:08.645007 env[1188]: time="2024-02-12T19:42:08.644952504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 12 19:42:08.649273 env[1188]: time="2024-02-12T19:42:08.649105282Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:42:08.655758 env[1188]: time="2024-02-12T19:42:08.655696040Z" level=info msg="CreateContainer within sandbox \"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 19:42:08.685753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633526980.mount: Deactivated successfully. 
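The audit PROCTITLE values above are the process's argv, hex-encoded with NUL separators (as in /proc/<pid>/cmdline). A minimal Go decoder for the value recorded for the iptables-nft restore; the hex constant is copied verbatim from the audit record:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// proctitle value from the audit(...:280) and audit(...:281) records above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368" +
		"002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are separated by NUL bytes, as in /proc/<pid>/cmdline.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}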
Feb 12 19:42:08.699807 env[1188]: time="2024-02-12T19:42:08.699746024Z" level=info msg="CreateContainer within sandbox \"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65b3fc96b764e864f94f3f8ef48fcc4e0c5955fb9e00e874228d8f90177a770d\"" Feb 12 19:42:08.700806 env[1188]: time="2024-02-12T19:42:08.700746779Z" level=info msg="StartContainer for \"65b3fc96b764e864f94f3f8ef48fcc4e0c5955fb9e00e874228d8f90177a770d\"" Feb 12 19:42:08.801652 env[1188]: time="2024-02-12T19:42:08.801586062Z" level=info msg="StartContainer for \"65b3fc96b764e864f94f3f8ef48fcc4e0c5955fb9e00e874228d8f90177a770d\" returns successfully" Feb 12 19:42:09.183939 systemd[1]: run-containerd-runc-k8s.io-65b3fc96b764e864f94f3f8ef48fcc4e0c5955fb9e00e874228d8f90177a770d-runc.RlofRR.mount: Deactivated successfully. Feb 12 19:42:09.193974 systemd-networkd[1061]: cali849a8797ae5: Gained IPv6LL Feb 12 19:42:09.566683 kubelet[1560]: E0212 19:42:09.566612 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:10.567060 kubelet[1560]: E0212 19:42:10.566963 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:11.567213 kubelet[1560]: E0212 19:42:11.567147 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:12.496339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998278644.mount: Deactivated successfully. Feb 12 19:42:12.567853 kubelet[1560]: E0212 19:42:12.567764 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:13.568222 kubelet[1560]: E0212 19:42:13.568159 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:14.052185 env[1188]: time="2024-02-12T19:42:14.052108057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:14.057684 env[1188]: time="2024-02-12T19:42:14.057612304Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:14.062911 env[1188]: time="2024-02-12T19:42:14.062843204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:14.066655 env[1188]: time="2024-02-12T19:42:14.066597332Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:14.067758 env[1188]: time="2024-02-12T19:42:14.067708798Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:42:14.069361 env[1188]: time="2024-02-12T19:42:14.069265238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 19:42:14.071110 env[1188]: time="2024-02-12T19:42:14.071042456Z" level=info msg="CreateContainer within sandbox 
\"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:42:14.105763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489239050.mount: Deactivated successfully. Feb 12 19:42:14.121831 env[1188]: time="2024-02-12T19:42:14.121770033Z" level=info msg="CreateContainer within sandbox \"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b71fa648e7271d583688706042da03f8c21edbb6e94103a8a593a3edf939063d\"" Feb 12 19:42:14.123225 env[1188]: time="2024-02-12T19:42:14.123172133Z" level=info msg="StartContainer for \"b71fa648e7271d583688706042da03f8c21edbb6e94103a8a593a3edf939063d\"" Feb 12 19:42:14.215299 env[1188]: time="2024-02-12T19:42:14.215224676Z" level=info msg="StartContainer for \"b71fa648e7271d583688706042da03f8c21edbb6e94103a8a593a3edf939063d\" returns successfully" Feb 12 19:42:14.569114 kubelet[1560]: E0212 19:42:14.569048 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:15.569328 kubelet[1560]: E0212 19:42:15.569224 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:15.965909 systemd[1]: run-containerd-runc-k8s.io-8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684-runc.I4Mp0M.mount: Deactivated successfully. Feb 12 19:42:16.042169 kubelet[1560]: E0212 19:42:16.042069 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:16.068194 kubelet[1560]: I0212 19:42:16.068019 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-x8xrh" podStartSLOduration=-9.2233720147868e+09 pod.CreationTimestamp="2024-02-12 19:41:54 +0000 UTC" firstStartedPulling="2024-02-12 19:42:07.343060928 +0000 UTC m=+50.279386162" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:42:15.025779767 +0000 UTC m=+57.962105020" watchObservedRunningTime="2024-02-12 19:42:16.067975269 +0000 UTC m=+59.004300519" Feb 12 19:42:16.096747 env[1188]: time="2024-02-12T19:42:16.096661992Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:16.102178 env[1188]: time="2024-02-12T19:42:16.102113684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:16.106021 env[1188]: time="2024-02-12T19:42:16.105961564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:16.109254 env[1188]: time="2024-02-12T19:42:16.109200600Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:16.110135 env[1188]: time="2024-02-12T19:42:16.110076391Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 12 19:42:16.113358 env[1188]: time="2024-02-12T19:42:16.113286109Z" level=info msg="CreateContainer within sandbox \"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 19:42:16.154473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695238277.mount: Deactivated successfully. Feb 12 19:42:16.171881 env[1188]: time="2024-02-12T19:42:16.171795280Z" level=info msg="CreateContainer within sandbox \"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f161fdc4b2209e84ab22df4d101f2882f7752ddacb757b147094504ff14fc5bc\"" Feb 12 19:42:16.173031 env[1188]: time="2024-02-12T19:42:16.172976492Z" level=info msg="StartContainer for \"f161fdc4b2209e84ab22df4d101f2882f7752ddacb757b147094504ff14fc5bc\"" Feb 12 19:42:16.262302 env[1188]: time="2024-02-12T19:42:16.261209918Z" level=info msg="StartContainer for \"f161fdc4b2209e84ab22df4d101f2882f7752ddacb757b147094504ff14fc5bc\" returns successfully" Feb 12 19:42:16.570274 kubelet[1560]: E0212 19:42:16.570216 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:16.697833 kubelet[1560]: I0212 19:42:16.697780 1560 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 19:42:16.699333 kubelet[1560]: I0212 19:42:16.699299 1560 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 19:42:17.034994 kubelet[1560]: I0212 19:42:17.034483 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-x8d4g" podStartSLOduration=-9.223371989820353e+09 pod.CreationTimestamp="2024-02-12 19:41:30 +0000 UTC" firstStartedPulling="2024-02-12 19:42:06.536075996 +0000 UTC m=+49.472401241" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:42:17.032479617 +0000 UTC m=+59.968804869" watchObservedRunningTime="2024-02-12 19:42:17.034423498 +0000 UTC m=+59.970748741" Feb 12 19:42:17.520249 kubelet[1560]: E0212 19:42:17.520185 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:17.551785 env[1188]: time="2024-02-12T19:42:17.551227359Z" level=info msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" Feb 12 19:42:17.570443 kubelet[1560]: E0212 19:42:17.570359 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.601 [WARNING][3000] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-csi--node--driver--x8d4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"436f3581-5e49-4ebf-b2ed-e5dfb138d87d", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3", Pod:"csi-node-driver-x8d4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calied88616f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.601 [INFO][3000] k8s.go 578: Cleaning up netns ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.601 [INFO][3000] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" iface="eth0" netns="" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.601 [INFO][3000] k8s.go 585: Releasing IP address(es) ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.601 [INFO][3000] utils.go 188: Calico CNI releasing IP address ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.639 [INFO][3006] ipam_plugin.go 415: Releasing address using handleID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.639 [INFO][3006] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.639 [INFO][3006] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.650 [WARNING][3006] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.650 [INFO][3006] ipam_plugin.go 443: Releasing address using workloadID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.655 [INFO][3006] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:17.658448 env[1188]: 2024-02-12 19:42:17.656 [INFO][3000] k8s.go 591: Teardown processing complete. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.659270 env[1188]: time="2024-02-12T19:42:17.658520320Z" level=info msg="TearDown network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" successfully" Feb 12 19:42:17.659270 env[1188]: time="2024-02-12T19:42:17.658572769Z" level=info msg="StopPodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" returns successfully" Feb 12 19:42:17.659693 env[1188]: time="2024-02-12T19:42:17.659649718Z" level=info msg="RemovePodSandbox for \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" Feb 12 19:42:17.660069 env[1188]: time="2024-02-12T19:42:17.659956726Z" level=info msg="Forcibly stopping sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\"" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.726 [WARNING][3024] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-csi--node--driver--x8d4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"436f3581-5e49-4ebf-b2ed-e5dfb138d87d", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"8a361187bccd2d26e021c8f0ccf46b5be3843a6f4fa1026ce264356f4de026f3", Pod:"csi-node-driver-x8d4g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calied88616f721", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.726 [INFO][3024] k8s.go 578: Cleaning up netns ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.726 [INFO][3024] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" iface="eth0" netns="" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.726 [INFO][3024] k8s.go 585: Releasing IP address(es) ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.726 [INFO][3024] utils.go 188: Calico CNI releasing IP address ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.762 [INFO][3030] ipam_plugin.go 415: Releasing address using handleID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.763 [INFO][3030] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.763 [INFO][3030] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.777 [WARNING][3030] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.778 [INFO][3030] ipam_plugin.go 443: Releasing address using workloadID ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" HandleID="k8s-pod-network.69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Workload="143.198.151.132-k8s-csi--node--driver--x8d4g-eth0" Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.785 [INFO][3030] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:17.789534 env[1188]: 2024-02-12 19:42:17.787 [INFO][3024] k8s.go 591: Teardown processing complete. ContainerID="69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786" Feb 12 19:42:17.789534 env[1188]: time="2024-02-12T19:42:17.788857767Z" level=info msg="TearDown network for sandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" successfully" Feb 12 19:42:17.795950 env[1188]: time="2024-02-12T19:42:17.795877638Z" level=info msg="RemovePodSandbox \"69bffcbc3e32d3a7563e42959456644d6c07cecbd4d2b864e1ca980d12544786\" returns successfully" Feb 12 19:42:17.796861 env[1188]: time="2024-02-12T19:42:17.796801255Z" level=info msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.846 [WARNING][3049] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f32f39f2-3127-4087-aae0-c6d600bae732", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6", Pod:"nginx-deployment-8ffc5cf85-x8xrh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali849a8797ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.847 [INFO][3049] k8s.go 578: Cleaning up netns ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.847 [INFO][3049] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" iface="eth0" netns="" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.847 [INFO][3049] k8s.go 585: Releasing IP address(es) ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.847 [INFO][3049] utils.go 188: Calico CNI releasing IP address ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.880 [INFO][3056] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.880 [INFO][3056] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.880 [INFO][3056] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.893 [WARNING][3056] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.893 [INFO][3056] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.898 [INFO][3056] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:17.900512 env[1188]: 2024-02-12 19:42:17.899 [INFO][3049] k8s.go 591: Teardown processing complete. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:17.901137 env[1188]: time="2024-02-12T19:42:17.900548926Z" level=info msg="TearDown network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" successfully" Feb 12 19:42:17.901137 env[1188]: time="2024-02-12T19:42:17.900586713Z" level=info msg="StopPodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" returns successfully" Feb 12 19:42:17.901559 env[1188]: time="2024-02-12T19:42:17.901519429Z" level=info msg="RemovePodSandbox for \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" Feb 12 19:42:17.901651 env[1188]: time="2024-02-12T19:42:17.901571580Z" level=info msg="Forcibly stopping sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\"" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.957 [WARNING][3076] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"f32f39f2-3127-4087-aae0-c6d600bae732", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"0aebe43e38fb5e17f53a22218e7b03d2fe6cec6fd7dfa4a758fdf6b35e06f4b6", Pod:"nginx-deployment-8ffc5cf85-x8xrh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali849a8797ae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.957 [INFO][3076] k8s.go 578: Cleaning up netns ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.957 [INFO][3076] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" iface="eth0" netns="" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.957 [INFO][3076] k8s.go 585: Releasing IP address(es) ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.957 [INFO][3076] utils.go 188: Calico CNI releasing IP address ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.987 [INFO][3083] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.987 [INFO][3083] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:17.987 [INFO][3083] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:18.002 [WARNING][3083] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:18.002 [INFO][3083] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" HandleID="k8s-pod-network.4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Workload="143.198.151.132-k8s-nginx--deployment--8ffc5cf85--x8xrh-eth0" Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:18.005 [INFO][3083] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:42:18.008568 env[1188]: 2024-02-12 19:42:18.007 [INFO][3076] k8s.go 591: Teardown processing complete. ContainerID="4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e" Feb 12 19:42:18.009561 env[1188]: time="2024-02-12T19:42:18.008603460Z" level=info msg="TearDown network for sandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" successfully" Feb 12 19:42:18.015014 env[1188]: time="2024-02-12T19:42:18.014954275Z" level=info msg="RemovePodSandbox \"4ad374e2aa9c13a859b8719c31fe8f9dcff13a9614fc49f84fceaec73e05cd6e\" returns successfully" Feb 12 19:42:18.571052 kubelet[1560]: E0212 19:42:18.570988 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:19.571850 kubelet[1560]: E0212 19:42:19.571791 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:20.572525 kubelet[1560]: E0212 19:42:20.572476 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:20.774000 audit[3115]: NETFILTER_CFG table=filter:89 family=2 entries=18 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.774000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe4db3de40 a2=0 a3=7ffe4db3de2c items=0 ppid=1801 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.788978 kernel: audit: type=1325 audit(1707766940.774:282): table=filter:89 family=2 entries=18 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.789199 kernel: audit: type=1300 audit(1707766940.774:282): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe4db3de40 a2=0 a3=7ffe4db3de2c items=0 ppid=1801 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.789250 kernel: audit: type=1327 audit(1707766940.774:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.782000 audit[3115]: NETFILTER_CFG table=nat:90 family=2 entries=94 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.782000 audit[3115]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe4db3de40 a2=0 a3=7ffe4db3de2c items=0 ppid=1801 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.811322 kernel: audit: type=1325 audit(1707766940.782:283): table=nat:90 family=2 entries=94 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.811530 kernel: audit: type=1300 audit(1707766940.782:283): arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe4db3de40 a2=0 a3=7ffe4db3de2c items=0 ppid=1801 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.817485 kernel: audit: type=1327 audit(1707766940.782:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.857827 kubelet[1560]: I0212 19:42:20.857659 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:42:20.900000 audit[3141]: NETFILTER_CFG table=filter:91 family=2 entries=30 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.900000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc629ec9e0 a2=0 a3=7ffc629ec9cc items=0 ppid=1801 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.916614 kernel: audit: type=1325 audit(1707766940.900:284): table=filter:91 family=2 entries=30 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.916814 kernel: audit: type=1300 audit(1707766940.900:284): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffc629ec9e0 a2=0 a3=7ffc629ec9cc items=0 ppid=1801 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.916914 kernel: audit: type=1327 audit(1707766940.900:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:20.904000 audit[3141]: NETFILTER_CFG table=nat:92 family=2 entries=94 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.904000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffc629ec9e0 a2=0 a3=7ffc629ec9cc items=0 ppid=1801 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 
19:42:20.928510 kernel: audit: type=1325 audit(1707766940.904:285): table=nat:92 family=2 entries=94 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:20.960681 kubelet[1560]: I0212 19:42:20.960511 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/eadcf21e-d2d5-41c3-aa77-a7845e7f2c69-data\") pod \"nfs-server-provisioner-0\" (UID: \"eadcf21e-d2d5-41c3-aa77-a7845e7f2c69\") " pod="default/nfs-server-provisioner-0" Feb 12 19:42:20.961171 kubelet[1560]: I0212 19:42:20.961123 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmxw\" (UniqueName: \"kubernetes.io/projected/eadcf21e-d2d5-41c3-aa77-a7845e7f2c69-kube-api-access-spmxw\") pod \"nfs-server-provisioner-0\" (UID: \"eadcf21e-d2d5-41c3-aa77-a7845e7f2c69\") " pod="default/nfs-server-provisioner-0" Feb 12 19:42:21.166841 env[1188]: time="2024-02-12T19:42:21.166146349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:eadcf21e-d2d5-41c3-aa77-a7845e7f2c69,Namespace:default,Attempt:0,}" Feb 12 19:42:21.410502 systemd-networkd[1061]: cali60e51b789ff: Link UP Feb 12 19:42:21.414629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:42:21.414996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 12 19:42:21.415314 systemd-networkd[1061]: cali60e51b789ff: Gained carrier Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.263 [INFO][3144] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.132-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default eadcf21e-d2d5-41c3-aa77-a7845e7f2c69 1321 0 2024-02-12 19:42:20 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 143.198.151.132 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.265 [INFO][3144] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.308 [INFO][3155] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" HandleID="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Workload="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 
19:42:21.435885 env[1188]: 2024-02-12 19:42:21.335 [INFO][3155] ipam_plugin.go 268: Auto assigning IP ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" HandleID="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Workload="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d020), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.132", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-12 19:42:21.308803195 +0000 UTC"}, Hostname:"143.198.151.132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.335 [INFO][3155] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.335 [INFO][3155] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.335 [INFO][3155] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.132' Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.341 [INFO][3155] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.353 [INFO][3155] ipam.go 372: Looking up existing affinities for host host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.363 [INFO][3155] ipam.go 489: Trying affinity for 192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.370 [INFO][3155] ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.377 [INFO][3155] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.377 [INFO][3155] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.381 [INFO][3155] ipam.go 1682: Creating new handle: k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69 Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.392 [INFO][3155] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.405 [INFO][3155] ipam.go 1216: Successfully claimed IPs: [192.168.60.195/26] block=192.168.60.192/26 handle="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.405 [INFO][3155] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.195/26] handle="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" host="143.198.151.132" Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.405 [INFO][3155] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:42:21.435885 env[1188]: 2024-02-12 19:42:21.405 [INFO][3155] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.60.195/26] IPv6=[] ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" HandleID="k8s-pod-network.b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Workload="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.437393 env[1188]: 2024-02-12 19:42:21.407 [INFO][3144] k8s.go 385: Populated endpoint ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"eadcf21e-d2d5-41c3-aa77-a7845e7f2c69", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:21.437393 env[1188]: 2024-02-12 19:42:21.407 [INFO][3144] k8s.go 386: Calico CNI using IPs: [192.168.60.195/32] ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.437393 env[1188]: 2024-02-12 19:42:21.407 [INFO][3144] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.437393 env[1188]: 2024-02-12 19:42:21.415 [INFO][3144] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.437822 env[1188]: 2024-02-12 19:42:21.418 [INFO][3144] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"eadcf21e-d2d5-41c3-aa77-a7845e7f2c69", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e6:0d:d3:7d:3b:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:21.437822 env[1188]: 2024-02-12 19:42:21.433 [INFO][3144] k8s.go 491: Wrote updated endpoint to datastore ContainerID="b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.132-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:42:21.474918 env[1188]: time="2024-02-12T19:42:21.474804221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:21.474918 env[1188]: time="2024-02-12T19:42:21.474868467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:21.475307 env[1188]: time="2024-02-12T19:42:21.474887013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:21.475631 env[1188]: time="2024-02-12T19:42:21.475529312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69 pid=3186 runtime=io.containerd.runc.v2 Feb 12 19:42:21.491000 audit[3201]: NETFILTER_CFG table=filter:93 family=2 entries=38 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:21.491000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=19500 a0=3 a1=7ffe925fbe90 a2=0 a3=7ffe925fbe7c items=0 ppid=2454 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:21.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:21.574316 kubelet[1560]: E0212 19:42:21.574256 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:21.576967 env[1188]: time="2024-02-12T19:42:21.576908449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:eadcf21e-d2d5-41c3-aa77-a7845e7f2c69,Namespace:default,Attempt:0,} returns sandbox id \"b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69\"" Feb 12 19:42:21.579609 env[1188]: time="2024-02-12T19:42:21.579570971Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:42:22.575308 kubelet[1560]: E0212 19:42:22.575239 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:22.760810 systemd-networkd[1061]: cali60e51b789ff: Gained IPv6LL Feb 12 19:42:23.576212 kubelet[1560]: E0212 19:42:23.576144 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:24.577503 kubelet[1560]: E0212 19:42:24.577377 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:25.009169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834359348.mount: Deactivated successfully. 
Feb 12 19:42:25.578573 kubelet[1560]: E0212 19:42:25.578482 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:26.578696 kubelet[1560]: E0212 19:42:26.578620 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:27.579405 kubelet[1560]: E0212 19:42:27.579343 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:28.119246 env[1188]: time="2024-02-12T19:42:28.119158781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:28.126527 env[1188]: time="2024-02-12T19:42:28.125726345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:28.131161 env[1188]: time="2024-02-12T19:42:28.131097175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:28.136631 env[1188]: time="2024-02-12T19:42:28.136572277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:28.138219 env[1188]: time="2024-02-12T19:42:28.138126350Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 19:42:28.141850 env[1188]: time="2024-02-12T19:42:28.141777658Z" level=info msg="CreateContainer within sandbox \"b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:42:28.163013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220118995.mount: Deactivated successfully. 
Feb 12 19:42:28.181152 env[1188]: time="2024-02-12T19:42:28.181039335Z" level=info msg="CreateContainer within sandbox \"b57c33aad4d3f5beda60cf57b0783fbc7d3a51c87fe5d7b6062d2ecabc736e69\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"382e7de60c7ab6c444405874a22d1bd901b135d0d659b9bb3c9b34ab31af1013\"" Feb 12 19:42:28.182629 env[1188]: time="2024-02-12T19:42:28.182567945Z" level=info msg="StartContainer for \"382e7de60c7ab6c444405874a22d1bd901b135d0d659b9bb3c9b34ab31af1013\"" Feb 12 19:42:28.272014 env[1188]: time="2024-02-12T19:42:28.271958818Z" level=info msg="StartContainer for \"382e7de60c7ab6c444405874a22d1bd901b135d0d659b9bb3c9b34ab31af1013\" returns successfully" Feb 12 19:42:28.580180 kubelet[1560]: E0212 19:42:28.580048 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:29.063725 kubelet[1560]: I0212 19:42:29.063679 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027791157e+09 pod.CreationTimestamp="2024-02-12 19:42:20 +0000 UTC" firstStartedPulling="2024-02-12 19:42:21.578756001 +0000 UTC m=+64.515081234" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:42:29.063137434 +0000 UTC m=+71.999462687" watchObservedRunningTime="2024-02-12 19:42:29.063618783 +0000 UTC m=+71.999944032" Feb 12 19:42:29.129000 audit[3325]: NETFILTER_CFG table=filter:94 family=2 entries=18 op=nft_register_rule pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:29.132301 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 19:42:29.132414 kernel: audit: type=1325 audit(1707766949.129:287): table=filter:94 family=2 entries=18 op=nft_register_rule pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:29.129000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdde3a1f10 a2=0 a3=7ffdde3a1efc items=0 ppid=1801 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:29.142521 kernel: audit: type=1300 audit(1707766949.129:287): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdde3a1f10 a2=0 a3=7ffdde3a1efc items=0 ppid=1801 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:29.142675 kernel: audit: type=1327 audit(1707766949.129:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:29.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:29.144000 audit[3325]: NETFILTER_CFG table=nat:95 family=2 entries=178 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:29.144000 audit[3325]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffdde3a1f10 a2=0 a3=7ffdde3a1efc items=0 ppid=1801 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:42:29.158959 kernel: audit: type=1325 audit(1707766949.144:288): table=nat:95 family=2 entries=178 op=nft_register_chain pid=3325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:42:29.159120 kernel: audit: type=1300 audit(1707766949.144:288): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffdde3a1f10 a2=0 a3=7ffdde3a1efc items=0 ppid=1801 pid=3325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:29.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:29.162971 kernel: audit: type=1327 audit(1707766949.144:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:42:29.580600 kubelet[1560]: E0212 19:42:29.580528 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:30.581725 kubelet[1560]: E0212 19:42:30.581588 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:31.583400 kubelet[1560]: E0212 19:42:31.583350 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:32.585246 kubelet[1560]: E0212 19:42:32.585184 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:33.586924 kubelet[1560]: E0212 19:42:33.586847 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:34.587863 kubelet[1560]: E0212 19:42:34.587765 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:35.589345 kubelet[1560]: E0212 19:42:35.589286 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:36.590363 kubelet[1560]: E0212 19:42:36.590292 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:37.520927 kubelet[1560]: E0212 19:42:37.520802 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:37.591814 kubelet[1560]: E0212 19:42:37.591763 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:38.039566 kubelet[1560]: I0212 19:42:38.039511 1560 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:42:38.081786 kubelet[1560]: I0212 19:42:38.081734 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-152608c7-4715-48e1-ad17-c95e58cc5d50\" (UniqueName: \"kubernetes.io/nfs/cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe-pvc-152608c7-4715-48e1-ad17-c95e58cc5d50\") pod \"test-pod-1\" (UID: \"cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe\") " pod="default/test-pod-1" Feb 12 19:42:38.081786 kubelet[1560]: I0212 19:42:38.081805 1560 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwpm\" (UniqueName: \"kubernetes.io/projected/cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe-kube-api-access-hmwpm\") pod 
\"test-pod-1\" (UID: \"cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe\") " pod="default/test-pod-1" Feb 12 19:42:38.238858 kernel: Failed to create system directory netfs Feb 12 19:42:38.239047 kernel: audit: type=1400 audit(1707766958.229:289): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.239100 kernel: Failed to create system directory netfs Feb 12 19:42:38.229000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.259913 kernel: audit: type=1400 audit(1707766958.229:289): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.260083 kernel: Failed to create system directory netfs Feb 12 19:42:38.229000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.267722 kernel: audit: type=1400 audit(1707766958.229:289): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.267904 kernel: Failed to create system directory netfs Feb 12 19:42:38.229000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.274790 kernel: audit: type=1400 audit(1707766958.229:289): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.229000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.229000 audit[3334]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557147e6c5e0 a1=153bc a2=55714631d2b0 a3=5 items=0 ppid=50 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:38.288549 kernel: audit: type=1300 audit(1707766958.229:289): arch=c000003e syscall=175 success=yes exit=0 a0=557147e6c5e0 a1=153bc a2=55714631d2b0 a3=5 items=0 ppid=50 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:38.229000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:42:38.293443 kernel: audit: type=1327 audit(1707766958.229:289): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:42:38.305081 kernel: Failed to create system directory fscache Feb 12 19:42:38.305261 kernel: audit: type=1400 audit(1707766958.293:290): avc: 
denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.305314 kernel: Failed to create system directory fscache Feb 12 19:42:38.305346 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.308161 kernel: audit: type=1400 audit(1707766958.293:290): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.308321 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.315283 kernel: audit: type=1400 audit(1707766958.293:290): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.315466 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.322235 kernel: audit: type=1400 audit(1707766958.293:290): avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.322404 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.329588 kernel: Failed to create system directory fscache Feb 12 19:42:38.329710 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.332491 kernel: Failed to create system directory fscache Feb 12 19:42:38.332577 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.335472 kernel: Failed to create system directory fscache Feb 12 19:42:38.335631 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.338482 kernel: Failed to create system directory fscache Feb 12 19:42:38.338577 kernel: Failed to create system directory fscache Feb 12 19:42:38.293000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.342584 kernel: FS-Cache: Loaded Feb 12 19:42:38.293000 audit[3334]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5571480819c0 a1=4c0fc a2=55714631d2b0 a3=5 items=0 ppid=50 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:38.293000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.390256 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.390408 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.390466 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.393345 kernel: Failed to create system directory sunrpc Feb 12 
19:42:38.393496 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.394777 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.396204 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.399177 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.399289 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.402101 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.402199 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.403663 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.406493 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.406630 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.407968 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.409113 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: 
denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.410316 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.411401 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.412691 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.413739 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.414838 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.416336 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.417873 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.419375 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.422391 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.422537 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.423868 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.425277 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.426504 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.428879 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.428958 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.431285 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.431365 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.435198 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.435299 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.435342 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.437729 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.437817 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 
12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.440330 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.440460 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.442909 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.443039 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.445969 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.446089 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.448722 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.448848 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.451138 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.451314 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.453491 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.453584 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC 
avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.455819 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.455943 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.458192 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.458285 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.460557 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.460640 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.463002 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.463124 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.463986 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.465229 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 19:42:38.467685 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.467802 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.470504 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.470622 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.471534 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.474028 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.474117 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.476469 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.476603 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.478701 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.478786 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.481022 kernel: Failed to create system directory sunrpc Feb 12 
19:42:38.481111 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.483479 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.483645 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.484542 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.485793 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.488778 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.489000 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.491782 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.491889 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.494397 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.494583 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.495485 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: 
denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.496775 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.497922 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.500542 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.500707 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.502007 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.505041 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.505134 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.506465 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.509197 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.509308 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.510391 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.511563 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.512968 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.515517 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.515704 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.518376 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.518521 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.521091 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.521186 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.522545 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.523761 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.524904 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 
12 19:42:38.526157 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.527257 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.528472 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.529797 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.530997 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.532277 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.533364 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.534590 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.535860 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.537152 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.540080 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.540195 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC 
avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.541446 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.543038 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.551147 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.552706 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.552900 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.552974 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.553040 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.553097 kernel: Failed to create system directory sunrpc Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.369000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.562198 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:42:38.562374 kernel: RPC: Registered udp transport module. Feb 12 19:42:38.562418 kernel: RPC: Registered tcp transport module. Feb 12 19:42:38.563375 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
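The repeated audit AVC records in this stretch (tclass=lockdown, lockdown_reason="use of tracefs") and the interleaved "Failed to create system directory sunrpc" / "nfs" / "nfs4" kernel warnings appear to be two views of the same condition: modprobe (running from /usr/bin/kmod) is loading the sunrpc/NFS modules, their trace-event setup touches tracefs, the kernel lockdown policy denies that access, and a warning is then logged for each trace-event subsystem directory that could not be created. The module loads themselves still go through, as the RPC transport registrations above and the SYSCALL records below (success=yes exit=0) indicate. The sketch that follows is a hypothetical helper, not part of the captured log: its name, regexes, and functions are assumptions of this example, and it assumes a one-entry-per-line dump such as the output of journalctl -k -o short-precise.

#!/usr/bin/env python3
# Hypothetical helper (illustrative only): condense repeated journal lines,
# such as the lockdown AVC denials and the "Failed to create system
# directory ..." warnings in this log, into counted unique messages, and
# decode the hex-encoded audit PROCTITLE fields into a readable command line.
import re
import sys
from collections import Counter

# Leading "Feb 12 19:42:38.467685 " style timestamp; stripping it lets
# otherwise-identical messages collapse into a single counted entry.
TIMESTAMP = re.compile(r"^[A-Z][a-z]{2} +\d+ +\d{2}:\d{2}:\d{2}\.\d+ ")
PROCTITLE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

def decode_proctitle(hex_value: str) -> str:
    # audit encodes the process command line as hex with NUL-separated argv,
    # e.g. 2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
    # decodes to "/sbin/modprobe -q -- fs-nfs".
    return " ".join(part.decode(errors="replace")
                    for part in bytes.fromhex(hex_value).split(b"\x00") if part)

def summarize(stream):
    counts = Counter()
    for raw in stream:
        line = TIMESTAMP.sub("", raw).strip()
        if not line:
            continue
        match = PROCTITLE.search(line)
        if match:
            line += f"  (decoded: {decode_proctitle(match.group(1))})"
        counts[line] += 1
    # Most frequent messages first.
    for message, count in counts.most_common():
        print(f"{count:6d}  {message}")

if __name__ == "__main__":
    summarize(sys.stdin)

Fed a one-entry-per-line copy of this boot log, a helper like this would collapse the hundreds of identical lockdown denials into a few counted lines and would print the decoded modprobe invocation (/sbin/modprobe -q -- fs-nfs) next to the PROCTITLE records that appear below.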
Feb 12 19:42:38.369000 audit[3334]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5571480cdad0 a1=1588c4 a2=55714631d2b0 a3=5 items=6 ppid=50 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:42:38.369000 audit: CWD cwd="/"
Feb 12 19:42:38.369000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PATH item=1 name=(null) inode=24410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PATH item=2 name=(null) inode=24410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PATH item=3 name=(null) inode=24411 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PATH item=4 name=(null) inode=24410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PATH item=5 name=(null) inode=24412 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:42:38.369000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 12 19:42:38.592304 kubelet[1560]: E0212 19:42:38.592229 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.620026 kernel: Failed to create system directory nfs
Feb 12 19:42:38.620177 kernel: Failed to create system directory nfs
Feb 12 19:42:38.620216 kernel: Failed to create system directory nfs
Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.621057 kernel: Failed to create system directory nfs
Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.622116 kernel: Failed to create system directory nfs
Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.623195 kernel: Failed to
create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.625563 kernel: Failed to create system directory nfs Feb 12 19:42:38.625641 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.626882 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.627956 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.629053 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.630122 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.631190 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.632275 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.633361 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.635592 kernel: Failed to create system directory nfs Feb 12 19:42:38.635689 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.636711 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.637804 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.638848 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.639974 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.641064 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.642143 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.643242 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.644816 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.647670 kernel: Failed to create system directory nfs Feb 12 19:42:38.647757 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.648830 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 
12 19:42:38.650253 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.651824 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.653063 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.654383 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.655775 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.658153 kernel: Failed to create system directory nfs Feb 12 19:42:38.658276 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.659327 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.660681 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.662085 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.663399 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.664743 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for 
pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.665875 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.667222 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.668475 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.669658 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.670763 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.671963 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.673203 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.674374 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.675597 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.676809 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.677886 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 19:42:38.678955 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.680069 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.681130 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.682310 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.683294 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.684370 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.685479 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.686526 kernel: Failed to create system directory nfs Feb 12 19:42:38.594000 audit[3334]: AVC avc: denied { confidentiality } for pid=3334 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.700944 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:42:38.594000 audit[3334]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557148270680 a1=e29dc a2=55714631d2b0 a3=5 items=0 ppid=50 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:38.594000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.751381 kernel: Failed to 
create system directory nfs4 Feb 12 19:42:38.751600 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.751660 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.752495 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.753622 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.754691 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.756068 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.758865 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.758956 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.761538 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.761624 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.762991 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.764235 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.765666 kernel: 
Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.767041 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.768571 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.769717 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.770846 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.773157 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.773275 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.774313 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.775395 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.776530 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.777786 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.779109 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.780457 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.781602 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.782927 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.784143 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.785340 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.786558 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.787812 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.788992 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.790181 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.791321 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.792621 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 19:42:38.793717 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.794984 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.796164 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.797405 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.798651 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.799976 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.801356 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.808290 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.808551 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.808617 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.808648 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.808704 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.808921 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.810092 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.812555 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.812642 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.813764 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.815028 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.816179 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.817528 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.819996 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.820079 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:42:38.821061 kernel: Failed to create system directory nfs4 Feb 12 19:42:38.731000 audit[3339]: AVC avc: denied { confidentiality } for pid=3339 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:38.822234 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.823680 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.825913 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.826003 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.827120 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.828607 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.829849 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.831121 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.832237 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.833414 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.834536 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.836822 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.836902 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.837960 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.839224 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.840611 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.842950 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.843039 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.844056 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.845179 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.846277 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.847406 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.848535 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.849628 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.850736 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.851945 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.853093 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.854201 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.855324 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.856437 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.857635 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.860175 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.860305 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.862603 kernel: Failed to create system directory nfs4
Feb 12 19:42:38.862687 kernel: Failed to create system directory nfs4
Feb 12 19:42:39.019963 kernel: NFS: Registering the id_resolver key type
Feb 12 19:42:39.020175 kernel: Key type id_resolver registered
Feb 12 19:42:39.020215 kernel: Key type id_legacy registered
Feb 12 19:42:38.731000 audit[3339]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fb5a073a010 a1=1d3cc4 a2=557717cf02b0 a3=5 items=0 ppid=50 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:42:38.731000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634
Feb 12 19:42:39.029000 audit[3340]: AVC avc: denied { confidentiality } for pid=3340 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:42:39.035814 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.035921 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.035982 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.036890 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.038001 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.039130 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.040283 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.041414 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.042549 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.043707 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.044828 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.045976 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.047103 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.048257 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.049508 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.051779 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.051861 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.052861 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.054053 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.055194 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.056326 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.057507 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.058636 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.065621 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.076005 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.076401 kernel: Failed to create system directory rpcgss
Feb 12 19:42:39.029000 audit[3340]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fad4c8d2010 a1=4f524 a2=560ac7dce2b0 a3=5 items=0 ppid=50 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:42:39.029000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36
Feb 12 19:42:39.592957 kubelet[1560]: E0212 19:42:39.592880 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:40.593425 kubelet[1560]: E0212 19:42:40.593355 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:41.593592 kubelet[1560]: E0212 19:42:41.593546 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:42.594105 kubelet[1560]: E0212 19:42:42.594031 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:43.594341 kubelet[1560]: E0212 19:42:43.594258 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:44.594851 kubelet[1560]: E0212 19:42:44.594711 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring"
path="/etc/kubernetes/manifests" Feb 12 19:42:45.235713 nfsidmap[3347]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-d-fc9a4b050f' Feb 12 19:42:45.595714 kubelet[1560]: E0212 19:42:45.595645 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:45.919837 systemd[1]: run-containerd-runc-k8s.io-8744f36dc8dfc1ba949bdaf6aaa56c0fd00ffbb7be66a25c4d264d76fb6e3684-runc.PWsZ85.mount: Deactivated successfully. Feb 12 19:42:46.596452 kubelet[1560]: E0212 19:42:46.596369 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:47.597251 kubelet[1560]: E0212 19:42:47.597189 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:48.598781 kubelet[1560]: E0212 19:42:48.598683 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:49.599571 kubelet[1560]: E0212 19:42:49.599518 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:50.601365 kubelet[1560]: E0212 19:42:50.601308 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:51.370262 nfsidmap[3359]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-d-fc9a4b050f' Feb 12 19:42:51.387000 audit[1282]: AVC avc: denied { watch_reads } for pid=1282 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.389477 kernel: kauditd_printk_skb: 332 callbacks suppressed Feb 12 19:42:51.389616 kernel: audit: type=1400 audit(1707766971.387:295): avc: denied { watch_reads } for pid=1282 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.399488 kernel: audit: type=1300 audit(1707766971.387:295): arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55fc6ae5b450 a2=10 a3=19bf49a1cd05d1ee items=0 ppid=1 pid=1282 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.387000 audit[1282]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55fc6ae5b450 a2=10 a3=19bf49a1cd05d1ee items=0 ppid=1 pid=1282 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.387000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:42:51.407123 kernel: audit: type=1327 audit(1707766971.387:295): proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:42:51.387000 audit[1282]: AVC avc: denied { watch_reads } for pid=1282 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.387000 audit[1282]: 
SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55fc6ae5b450 a2=10 a3=19bf49a1cd05d1ee items=0 ppid=1 pid=1282 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.422111 kernel: audit: type=1400 audit(1707766971.387:296): avc: denied { watch_reads } for pid=1282 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.422260 kernel: audit: type=1300 audit(1707766971.387:296): arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55fc6ae5b450 a2=10 a3=19bf49a1cd05d1ee items=0 ppid=1 pid=1282 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.422306 kernel: audit: type=1327 audit(1707766971.387:296): proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:42:51.387000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:42:51.399000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.431176 kernel: audit: type=1400 audit(1707766971.399:297): avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.431338 kernel: audit: type=1400 audit(1707766971.399:298): avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.399000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2392 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:42:51.547776 env[1188]: time="2024-02-12T19:42:51.547699231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe,Namespace:default,Attempt:0,}" Feb 12 19:42:51.603131 kubelet[1560]: E0212 19:42:51.603074 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:51.810472 systemd-networkd[1061]: cali5ec59c6bf6e: Link UP Feb 12 19:42:51.815915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:42:51.816107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 12 19:42:51.816228 systemd-networkd[1061]: cali5ec59c6bf6e: Gained carrier Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.665 [INFO][3385] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.132-k8s-test--pod--1-eth0 default cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe 1389 0 2024-02-12 19:42:22 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.151.132 test-pod-1 eth0 default [] [] 
[kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.666 [INFO][3385] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.718 [INFO][3398] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" HandleID="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Workload="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.742 [INFO][3398] ipam_plugin.go 268: Auto assigning IP ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" HandleID="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Workload="143.198.151.132-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027db60), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.132", "pod":"test-pod-1", "timestamp":"2024-02-12 19:42:51.718138007 +0000 UTC"}, Hostname:"143.198.151.132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.742 [INFO][3398] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.742 [INFO][3398] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.742 [INFO][3398] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.132' Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.747 [INFO][3398] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.758 [INFO][3398] ipam.go 372: Looking up existing affinities for host host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.768 [INFO][3398] ipam.go 489: Trying affinity for 192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.773 [INFO][3398] ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.778 [INFO][3398] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.778 [INFO][3398] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.781 [INFO][3398] ipam.go 1682: Creating new handle: k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780 Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.788 [INFO][3398] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.802 [INFO][3398] ipam.go 1216: Successfully claimed IPs: [192.168.60.196/26] block=192.168.60.192/26 handle="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.802 [INFO][3398] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.196/26] handle="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" host="143.198.151.132" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.802 [INFO][3398] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.802 [INFO][3398] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.60.196/26] IPv6=[] ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" HandleID="k8s-pod-network.7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Workload="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.845533 env[1188]: 2024-02-12 19:42:51.805 [INFO][3385] k8s.go 385: Populated endpoint ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe", ResourceVersion:"1389", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:51.846969 env[1188]: 2024-02-12 19:42:51.805 [INFO][3385] k8s.go 386: Calico CNI using IPs: [192.168.60.196/32] ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.846969 env[1188]: 2024-02-12 19:42:51.805 [INFO][3385] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.846969 env[1188]: 2024-02-12 19:42:51.822 [INFO][3385] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.846969 env[1188]: 2024-02-12 19:42:51.824 [INFO][3385] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.132-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe", ResourceVersion:"1389", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.132", ContainerID:"7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ba:76:8e:20:9c:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:42:51.846969 env[1188]: 2024-02-12 19:42:51.835 [INFO][3385] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.132-k8s-test--pod--1-eth0" Feb 12 19:42:51.856000 audit[3408]: NETFILTER_CFG table=filter:96 family=2 entries=38 op=nft_register_chain pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:51.856000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=19080 a0=3 a1=7ffd4572a140 a2=0 a3=7ffd4572a12c items=0 ppid=2454 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.872512 kernel: audit: type=1325 audit(1707766971.856:299): table=filter:96 family=2 entries=38 op=nft_register_chain pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:42:51.872674 kernel: audit: type=1300 audit(1707766971.856:299): arch=c000003e syscall=46 success=yes exit=19080 a0=3 a1=7ffd4572a140 a2=0 a3=7ffd4572a12c items=0 ppid=2454 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:51.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:42:51.873865 env[1188]: time="2024-02-12T19:42:51.873770756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:51.874052 env[1188]: time="2024-02-12T19:42:51.873846026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:51.874052 env[1188]: time="2024-02-12T19:42:51.873864399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:51.874255 env[1188]: time="2024-02-12T19:42:51.874109425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780 pid=3425 runtime=io.containerd.runc.v2 Feb 12 19:42:51.967541 env[1188]: time="2024-02-12T19:42:51.967484199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cd421188-6c1a-4e0b-bf41-b2e67ba7a1fe,Namespace:default,Attempt:0,} returns sandbox id \"7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780\"" Feb 12 19:42:51.970090 env[1188]: time="2024-02-12T19:42:51.970015715Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:42:52.525796 env[1188]: time="2024-02-12T19:42:52.525727088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:52.532115 env[1188]: time="2024-02-12T19:42:52.532051875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:52.535963 env[1188]: time="2024-02-12T19:42:52.535905961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:52.540238 env[1188]: time="2024-02-12T19:42:52.540172898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:52.541565 env[1188]: time="2024-02-12T19:42:52.541514983Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:42:52.544878 env[1188]: time="2024-02-12T19:42:52.544820603Z" level=info msg="CreateContainer within sandbox \"7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 19:42:52.571209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149056220.mount: Deactivated successfully. 
Feb 12 19:42:52.585308 env[1188]: time="2024-02-12T19:42:52.585248066Z" level=info msg="CreateContainer within sandbox \"7668fd9c619096c91021a928a4d133ecc408eb013eab53a36b7697b083833780\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"69d1311ee49731f979e44bfc3818c1f3ac71f511982c299fc593053a598c4ff9\"" Feb 12 19:42:52.586738 env[1188]: time="2024-02-12T19:42:52.586690225Z" level=info msg="StartContainer for \"69d1311ee49731f979e44bfc3818c1f3ac71f511982c299fc593053a598c4ff9\"" Feb 12 19:42:52.603622 kubelet[1560]: E0212 19:42:52.603564 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:52.672870 env[1188]: time="2024-02-12T19:42:52.672784982Z" level=info msg="StartContainer for \"69d1311ee49731f979e44bfc3818c1f3ac71f511982c299fc593053a598c4ff9\" returns successfully" Feb 12 19:42:53.416821 systemd-networkd[1061]: cali5ec59c6bf6e: Gained IPv6LL Feb 12 19:42:53.603989 kubelet[1560]: E0212 19:42:53.603941 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:53.812993 kubelet[1560]: E0212 19:42:53.812948 1560 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:54.604876 kubelet[1560]: E0212 19:42:54.604776 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:55.605472 kubelet[1560]: E0212 19:42:55.605355 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"